path: root/llvm/lib/Transforms/Utils
Commit message | Author | Date | Files | Lines
...
* [LCSSA] Don't insert tokens into the worklist at all. | Davide Italiano | 2017-04-17 | 1 | -7/+8
We're gonna skip them anyway, so there's no point in inserting them in the first place.
llvm-svn: 300452
* [LoopPeeling] Get rid of Phis that become invariant after N steps | Max Kazantsev | 2017-04-17 | 1 | -20/+83
This patch is a generalization of the improvement introduced in rL296898. Previously, we were able to peel one iteration of a loop to get rid of a Phi that becomes an invariant on the 2nd iteration. In the more general case, if a Phi becomes invariant after N iterations, we can peel N times and turn it into an invariant. In order to do this, for every Phi in the loop's header we define the Invariant Depth value, which is calculated as follows:

Given %x = phi <Inputs from above the loop>, ..., [%y, %back.edge].

If %y is a loop invariant, then Depth(%x) = 1.
If %y is a Phi from the loop header, Depth(%x) = Depth(%y) + 1.
Otherwise, Depth(%x) is infinite.

Notice that if we peel a loop, all Phis with Depth = 1 become invariants, and all other Phis with finite depth decrease the depth by 1. Thus, peeling the first N iterations allows us to turn all Phis with Depth <= N into invariants.

Reviewers: reames, apilipenko, mkuper, skatkov, anna, sanjoy
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31613
llvm-svn: 300446
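To make the Depth rule above concrete, here is a rough sketch of how such a per-phi invariant depth could be computed; the helper name and the recursion limit are illustrative assumptions, not the code added by this commit.

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Return how many peeled iterations it takes for Phi to become invariant,
    // or 0 if that never happens (the "infinite depth" case above).
    static unsigned phiInvariantDepth(PHINode *Phi, Loop &L, unsigned Limit) {
      BasicBlock *Latch = L.getLoopLatch();
      if (!Latch)
        return 0;
      Value *Incoming = Phi->getIncomingValueForBlock(Latch);
      if (L.isLoopInvariant(Incoming))
        return 1;                                   // Depth(%x) = 1
      if (auto *IncPhi = dyn_cast<PHINode>(Incoming))
        if (IncPhi->getParent() == L.getHeader() && Limit > 1)
          if (unsigned D = phiInvariantDepth(IncPhi, L, Limit - 1))
            return D + 1;                           // Depth(%x) = Depth(%y) + 1
      return 0;                                     // effectively infinite
    }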
* [LoopPeeling] Fix condition for phi-eliminating peeling | Max Kazantsev | 2017-04-17 | 1 | -1/+2
When peeling loops based on phis becoming invariants, we made a wrong loop size check. UP.Threshold should be compared against the total number of instructions after the transformation, which is equal to 2 * LoopSize in the case of peeling one iteration. We should also check that the maximum allowed number of peeled iterations is not zero.

Reviewers: sanjoy, anna, reames, mkuper
Reviewed By: mkuper
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31753
llvm-svn: 300441
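A minimal sketch of the corrected guard; `MaxPeelCount` is an assumed name for the maximum allowed peeled iterations computed elsewhere, and the surrounding unroll-preferences code is simplified:

    // Peeling one iteration roughly doubles the code, so the size check must
    // use the post-transform size (2 * LoopSize), and peeling must actually
    // be allowed (the maximum peel count must be non-zero).
    if (MaxPeelCount == 0 || 2 * LoopSize > UP.Threshold)
      return; // do not peel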
* [LCSSA] Simplify a loop. NFCI. | Davide Italiano | 2017-04-17 | 1 | -7/+3
llvm-svn: 300433
* [LCSSA] Fix non-determinism due to iterating over a SmallPtrSet. | Davide Italiano | 2017-04-16 | 1 | -3/+3
Use a SmallSetVector instead.
llvm-svn: 300431
* [IR] Make getParamAttributes take argument numbers, not ArgNo+1 | Reid Kleckner | 2017-04-13 | 1 | -1/+1
Add hasParamAttribute() and use it instead of hasAttribute(ArgNo+1, Kind) everywhere. The fact that the AttributeList index for an argument is ArgNo+1 should be a hidden implementation detail.

NFC
llvm-svn: 300272
* [LCSSA] Efficiently compute blocks dominating at least one exit. | Davide Italiano | 2017-04-13 | 1 | -19/+54
For LCSSA purposes, loop BBs not dominating any of the exits aren't interesting, as none of the values defined in these blocks can be used outside the loop. The way the code computed this information was by comparing each BB of the loop with each of the exit blocks and asking the dominator tree about their dominance relation. This is slow. A more efficient way, implemented here, is to start from the exit blocks and walk the dominator tree upwards until we hit the header. By transitivity, all the blocks we encounter on our path dominate an exit.

For the testcase provided in PR31851, this reduces compile time on `opt -O2` by ~25%, going from 1m47s to 1m22s.

Thanks to Dan/MichaelZ for discussions/suggesting the approach/review.

Differential Revision: https://reviews.llvm.org/D31843
llvm-svn: 300255
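The upward walk described above might look roughly like this; a sketch with assumed local names, not the exact committed code:

    #include "llvm/ADT/SetVector.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Dominators.h"
    using namespace llvm;

    static void collectBlocksDominatingExits(
        Loop &L, DominatorTree &DT,
        SmallSetVector<BasicBlock *, 8> &BlocksDominatingExits) {
      SmallVector<BasicBlock *, 8> ExitBlocks;
      L.getExitBlocks(ExitBlocks);
      for (BasicBlock *Exit : ExitBlocks) {
        // Walk immediate dominators upwards; by transitivity, every loop
        // block we visit dominates at least one exit.
        for (DomTreeNode *N = DT.getNode(Exit)->getIDom(); N; N = N->getIDom()) {
          BasicBlock *BB = N->getBlock();
          if (!L.contains(BB))
            break;                              // walked out of the loop
          if (!BlocksDominatingExits.insert(BB))
            break;                              // already reached via another exit
          if (BB == L.getHeader())
            break;                              // reached the loop header
        }
      }
    }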
* [LCSSA] Assert that we always have a valid loop. | Davide Italiano | 2017-04-13 | 1 | -0/+1
We could otherwise add BBs not belonging to a loop in `formLCSSA` and later crash when trying to iterate the loop blocks.
llvm-svn: 300244
* [LCSSA] Remove spurious whitespaces. NFCI. | Davide Italiano | 2017-04-13 | 1 | -1/+1
llvm-svn: 300243
* [LCSSA] Use `auto` when the type is obvious. NFCI. | Davide Italiano | 2017-04-13 | 1 | -3/+3
llvm-svn: 300242
* [LV] Fix the vector code generation for first order recurrence | Anna Thomas | 2017-04-13 | 1 | -12/+3
Summary:
In first order recurrences where phis are used outside the loop, we should generate an additional vector.extract of the second-last element from the vectorized phi update. This is because we require the phi itself (which is the value at the second-last iteration of the vector loop) and not the phi's update within the loop. Also fix the code gen when we just unroll, but don't vectorize. Fixes PR32396.

Reviewers: mssimpso, mkuper, anemet
Subscribers: llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D31979
llvm-svn: 300238
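The extra extract described in the summary could be created roughly as follows; `Builder`, `VecRecurPhi`, and `VF` stand in for the vectorizer's real state and are assumptions for illustration:

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    // Outside-the-loop users need the value of the recurrence phi itself,
    // i.e. lane VF-2 (second to last), not the lane VF-1 update.
    static Value *extractForOutsideUsers(IRBuilder<> &Builder,
                                         Value *VecRecurPhi, unsigned VF) {
      return Builder.CreateExtractElement(VecRecurPhi, Builder.getInt32(VF - 2),
                                          "vector.recur.extract.for.phi");
    }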
* [IR] Take func, ret, and arg attrs separately in AttributeList::get | Reid Kleckner | 2017-04-13 | 1 | -10/+7
This seems like a much more natural API, based on Derek Schuff's comments on r300015. It further hides the implementation detail of AttributeList that function attributes come last and appear at index ~0U, which is easy for the user to screw up. git diff says it saves code as well:

    97 insertions(+), 137 deletions(-)

This also makes it easier to change the implementation, which I want to do next.
llvm-svn: 300153
* [IR] Redesign the case iterator in SwitchInst to actually be an iterator and to expose a handle to represent the actual case, rather than having the iterator return a reference to itself. | Chandler Carruth | 2017-04-12 | 5 | -46/+44
All of this allows the iterator to be used with common STL facilities, standard algorithms, etc.

Doing this exposed some missing facilities in the iterator facade that I've fixed and required some work to the actual iterator to fully support the necessary API.

Differential Revision: https://reviews.llvm.org/D31548
llvm-svn: 300032
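As a small usage sketch (the predicate and helper are hypothetical), the redesigned iterator composes directly with range-based algorithms:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Check whether every case of a switch maps to the constant one.
    static bool allCasesAreOne(SwitchInst *SI) {
      return all_of(SI->cases(), [](const SwitchInst::CaseHandle &Case) {
        return Case.getCaseValue()->isOne();
      });
    }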
* [IR] Add AttributeSet to hide AttributeSetNode* again, NFC | Reid Kleckner | 2017-04-12 | 1 | -7/+5
Summary:
For now, it just wraps AttributeSetNode*. Eventually, it will hold AvailableAttrs as an inline bitset, and adding and removing enum attributes will be super cheap. This sinks AttributeSetNode back down to lib/IR/AttributeImpl.h.

Reviewers: pete, chandlerc
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D31940
llvm-svn: 300014
* [LV] Avoid vectorizing first order recurrence when phi uses are outside loop | Anna Thomas | 2017-04-11 | 1 | -4/+14
In the vectorization of first order recurrence, we vectorize such that the last element in the vector will be the one extracted to pass into the scalar remainder loop. However, this is not true when a phi (other than the primary induction variable) is used outside the loop. In such a case, we need the value from the second-last iteration (i.e. the phi value), not the last iteration (which would be the phi update). I've added a test case for this. Also see PR32396.

A follow up patch would generate the correct code gen for such cases, and turn this vectorization on.

Differential Revision: https://reviews.llvm.org/D31910
Reviewers: mssimpso
llvm-svn: 299985
* MemorySSA: Move to Analysis, from Transforms/Utils. | Daniel Berlin | 2017-04-11 | 4 | -2557/+0
It's used as Analysis, it has Analysis passes, and once NewGVN is made an Analysis, this removes the cross dependency from Analysis to Transforms/Utils. NFC.
llvm-svn: 299980
* [AddDiscriminators] Assign discriminators to MemIntrinsic calls. | Andrea Di Biagio | 2017-04-11 | 1 | -1/+15
Before this patch, pass AddDiscriminators always avoided assigning discriminators to intrinsic calls. This was done mainly for two reasons:
1) We wanted to minimize the number of base discriminators used.
2) We wanted to avoid non-deterministic discriminator assignment for different debug levels.

Unfortunately, that approach was problematic for MemIntrinsic calls. MemIntrinsic calls can be split by SROA into loads and stores, and each new load/store instruction would obtain the debug location from the original intrinsic call. If we don't assign a discriminator to MemIntrinsic calls, then we cannot correctly set the discriminator for the newly created loads and stores. This may have a negative impact on the basic block weight computation performed by the SampleLoader.

This patch fixes the issue by letting MemIntrinsic calls have a discriminator.

Differential Revision: https://reviews.llvm.org/D31900
llvm-svn: 299972
* Module::getOrInsertFunction is using C-style vararg instead of variadic templates. | Serge Guelton | 2017-04-11 | 2 | -20/+19
From a user perspective, it forces the use of an annoying nullptr to mark the end of the vararg, and there's no type checking on the arguments. The variadic template is an obvious solution to both issues.

Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299949
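A simplified before/after sketch of the API shape; the wrapper name is invented and the signatures are abbreviated, not the literal declarations in Module.h:

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    // A variadic template collects the parameter types and checks them at
    // compile time, so no nullptr sentinel is needed.
    template <typename... ArgsTy>
    Constant *getOrInsertFunctionSketch(Module &M, StringRef Name, Type *RetTy,
                                        ArgsTy... Args) {
      SmallVector<Type *, 8> ArgTys{Args...};
      return M.getOrInsertFunction(
          Name, FunctionType::get(RetTy, ArgTys, /*isVarArg=*/false));
    }

    // Old call style:  M.getOrInsertFunction("foo", Int32Ty, Int8PtrTy, nullptr);
    // New call style:  getOrInsertFunctionSketch(M, "foo", Int32Ty, Int8PtrTy);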
* Revert "Turn some C-style vararg into variadic templates"Diana Picus2017-04-112-19/+20
| | | | | | | This reverts commit r299925 because it broke the buildbots. See e.g. http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15/builds/6008 llvm-svn: 299928
* Turn some C-style vararg into variadic templates | Serge Guelton | 2017-04-11 | 2 | -20/+19
Module::getOrInsertFunction is using C-style vararg instead of variadic templates. From a user perspective, it forces the use of an annoying nullptr to mark the end of the vararg, and there's no type checking on the arguments. The variadic template is an obvious solution to both issues.
llvm-svn: 299925
* Simplify the code and remove dead code | Sylvestre Ledru | 2017-04-11 | 1 | -5/+3
Summary:
Fix coverity cid 1374240

Reviewers: dberlin
Reviewed By: dberlin
Differential Revision: https://reviews.llvm.org/D31928
llvm-svn: 299924
* Reland "[IR] Make AttributeSetNode public, avoid temporary AttributeList copies"Reid Kleckner2017-04-102-11/+14
| | | | | | | | | | | | | | | | | | | | | | | | | This re-lands r299875. I introduced a bug in Clang code responsible for replacing K&R, no prototype declarations with a real function definition with a prototype. The bug was here: // Collect any return attributes from the call. - if (oldAttrs.hasAttributes(llvm::AttributeList::ReturnIndex)) - newAttrs.push_back(llvm::AttributeList::get(newFn->getContext(), - oldAttrs.getRetAttributes())); + newAttrs.push_back(oldAttrs.getRetAttributes()); Previously getRetAttributes() carried AttributeList::ReturnIndex in its AttributeList. Now that we return the AttributeSetNode* directly, it no longer carries that index, and we call this overload with a single node: AttributeList::get(LLVMContext&, ArrayRef<AttributeSetNode*>) That aborted with an assertion on x86_32 targets. I added an explicit triple to the test and added CHECKs to help find issues like this in the future sooner. llvm-svn: 299899
* [MemorySSA] We don't need to compute dominator levels anymore. | Davide Italiano | 2017-04-10 | 1 | -7/+0
Differential Revision: https://reviews.llvm.org/D31818
llvm-svn: 299893
* Allow DataLayout to specify addrspace for allocas. | Matt Arsenault | 2017-04-10 | 3 | -18/+27
LLVM makes several assumptions about address space 0. However, alloca is presently constrained to always return this address space. There's no real way to avoid using alloca, so without this there is no way to opt out of these assumptions.

The problematic assumptions include:
- That the pointer size used for the stack is the same size as the code size pointer, which is also the maximum sized pointer.
- That 0 is an invalid, non-dereferenceable pointer value.

These are problems for AMDGPU because alloca is used to implement the private address space, which uses a 32-bit index as the pointer value. Other pointers are 64-bit and behave more like LLVM's notion of generic address space. By changing the address space used for allocas, we can change our generic pointer type to be LLVM's generic pointer type, which does have similar properties.
llvm-svn: 299888
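With this change a target can name the alloca address space through an "A<n>" component of its data layout string; the layout string below is only an illustrative example, not any particular target's real layout:

    #include "llvm/IR/DataLayout.h"

    // "A5" declares address space 5 as the address space allocas live in.
    unsigned allocaAddrSpaceExample() {
      llvm::DataLayout DL("e-p:64:64-p5:32:32-A5");
      return DL.getAllocaAddrSpace(); // 5
    }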
* Revert "[asan] Fix dead stripping of globals on Linux."Evgeniy Stepanov2017-04-101-32/+0
| | | | | | This reverts commit r299697, which caused a big increase in object file size. llvm-svn: 299879
* Revert "[IR] Make AttributeSetNode public, avoid temporary AttributeList copies"Reid Kleckner2017-04-102-14/+11
| | | | | | | This reverts r299875. A Linux bot came back with a test failure: http://bb.pgr.jp/builders/test-clang-i686-linux-RA/builds/741/steps/test_clang/logs/Clang%20%3A%3A%20CodeGen__2006-05-19-SingleEltReturn.c llvm-svn: 299878
* [IR] Make AttributeSetNode public, avoid temporary AttributeList copies | Reid Kleckner | 2017-04-10 | 2 | -11/+14
Summary:
AttributeList::get(Fn|Ret|Param)Attributes no longer creates a temporary AttributeList just to hide the AttributeSetNode type.

I've also added a factory method to create AttributeLists from a parallel array of AttributeSetNodes. I think this simplifies construction of AttributeLists when rewriting function prototypes. Previously we would test if a particular index had attributes, and conditionally add a temporary attribute list to a vector. Now the attribute set vector is parallel to the argument vector that these passes already construct.

My long term vision is to wrap AttributeSetNode* inside an AttributeSet type that holds the enum attributes, but that will come in a follow up change.

I haven't done any performance measurements for this change because profiling hasn't shown that any of the affected code is hot.

Reviewers: pete, chandlerc, sanjoy, hfinkel
Reviewed By: pete
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D31198
llvm-svn: 299875
* MemorySSA: Make lifetime starts defs for mustaliased pointers | Daniel Berlin | 2017-04-10 | 1 | -2/+4
Summary:
While we don't want them aliasing with other pointers, there seems to be no point in not having them clobber must-aliased pointers. If some day we split the aliasing and ordering chains, we'd make this not aliasing but an ordering barrier (i.e. it doesn't affect its memory, but we can't hoist it above it).

Reviewers: hfinkel, george.burgess.iv
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31865
llvm-svn: 299865
* [Mem2Reg] Remove AliasSetTracker updating logic from the pass. | Davide Italiano | 2017-04-09 | 2 | -39/+7
No caller has been passing it for a long time.
llvm-svn: 299827
* [MemorySSA] Fix use of pointsToConstantMemory in isUseTriviallyOptimizableToLiveOnEntry | Hal Finkel | 2017-04-09 | 1 | -1/+2
In isUseTriviallyOptimizableToLiveOnEntry, pointsToConstantMemory needs to be called on the load's pointer operand, not on the result of the load (which might not even be a pointer).
llvm-svn: 299823
* AliasAnalysis: Be less conservative about volatile than atomic. | Daniel Berlin | 2017-04-07 | 1 | -2/+24
Summary:
getModRefInfo is meant to answer the question "what impact does this instruction have on a given memory location" (not even another instruction).

Long debate on this on IRC comes to the conclusion the answer should be "nothing special". That is, a noalias volatile store does not affect a memory location just by being volatile.

Note: DSE and GVN and memdep currently believe this, because memdep just goes behind AA's back after it says "modref" right now. See line 635 of memdep. Prior to this patch we would get modref there, then check aliasing, and if it said noalias, we would continue. getModRefInfo *already* has this same AA check, it just wasn't being used because volatile was lumped in with ordering.

(I am separately testing whether this code in memdep is now dead except for the invariant load case)

Reviewers: jyknight, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31726
llvm-svn: 299741
* Revert "Turn some C-style vararg into variadic templates"Mehdi Amini2017-04-062-30/+33
| | | | | | This reverts commit r299699, the examples needs to be updated. llvm-svn: 299702
* Turn some C-style vararg into variadic templates | Mehdi Amini | 2017-04-06 | 2 | -33/+30
Module::getOrInsertFunction is using C-style vararg instead of variadic templates. From a user perspective, it forces the use of an annoying nullptr to mark the end of the vararg, and there's no type checking on the arguments. The variadic template is an obvious solution to both issues.

Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>
Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299699
* [asan] Fix dead stripping of globals on Linux. | Evgeniy Stepanov | 2017-04-06 | 1 | -0/+32
Use a combination of !associated, comdat, @llvm.compiler.used and custom sections to allow dead stripping of globals and their asan metadata. Sometimes.

Currently this works on LLD, which supports SHF_LINK_ORDER with sh_link pointing to the associated section. This also works on BFD, which seems to treat comdats as all-or-nothing with respect to linker GC. There is a weird quirk where the "first" global in each link is never GC-ed because of the section symbols. At this moment it does not work on Gold (as in the globals are never stripped).

This is a re-land of r298158 rebased on D31358. This time, asan.module_ctor is put in a comdat as well to avoid quadratic behavior in Gold.
llvm-svn: 299697
* [asan] Delay creation of asan ctor. | Evgeniy Stepanov | 2017-04-06 | 1 | -5/+13
Create the constructor in the module pass. This is needed for the GC-friendly globals change, where the constructor can be put in a comdat in some cases, but we don't know about that in the function pass.

This is a rebase of r298731 which was reverted due to a false alarm.
llvm-svn: 299695
* MemorySSA: Remove MemorySSA walker caching. | Daniel Berlin | 2017-04-05 | 1 | -217/+14
Summary:
Remove all the caching the clobber walker does, and that the caching walker does. With the patch to enable storing clobbering access results for stores, I can find no improvement with the cache turned on (and a number of degradations, both time and memory, from the cost of caching). For a large program I have, we do millions of lookups and inserts with zero hits.

I haven't tried to rename or simplify the walker otherwise yet.

(Appreciate some perf testing on this past my own testing)

Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31576
llvm-svn: 299578
* Re-apply MemorySSA: Add support for caching clobbering access in stores, with some fixes. | Daniel Berlin | 2017-04-04 | 2 | -9/+9
Summary:
This enables us to cache the clobbering access for stores, despite the fact that we can't rewrite the use-def chains themselves.

Early testing shows that, after this change, for larger testcases, it will be a significant net positive (memory and time) to remove the walker caching.

Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299486
* Revert "MemorySSA: Add support for caching clobbering access in stores"Daniel Berlin2017-04-042-9/+9
| | | | | | This reverts revision r299322. llvm-svn: 299485
* [BypassSlowDivision] Do not bypass division of hash-like values | Nikolai Bozhenov | 2017-04-02 | 1 | -12/+81
Disable bypassing if one of the operands looks like a hash value. Slow division often occurs in hashtable implementations and fast division is never taken there because a hash value is extremely unlikely to have enough upper bits set to zero.

A value is considered to be hash-like if it is produced by:
1) XOR operation
2) Multiplication by a constant wider than the shorter type
3) PHI node with all incoming values being hash-like

Differential Revision: https://reviews.llvm.org/D28200
llvm-svn: 299329
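A rough sketch of the screening logic described above; the helper name, the depth limit, and the exact operand handling are invented for illustration and are not the committed code:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Heuristically decide whether V looks like a hash value, in which case
    // bypassing the slow division is pointless.
    static bool isHashLikeValue(Value *V, unsigned ShortBitWidth,
                                unsigned Depth = 0) {
      auto *I = dyn_cast<Instruction>(V);
      if (!I || Depth > 2)
        return false;
      switch (I->getOpcode()) {
      case Instruction::Xor:
        return true;
      case Instruction::Mul: {
        // Multiplication by a constant wider than the short (bypass) type.
        auto *C = dyn_cast<ConstantInt>(I->getOperand(1));
        return C && C->getValue().getActiveBits() > ShortBitWidth;
      }
      case Instruction::PHI:
        // A phi is hash-like only if every incoming value is hash-like.
        return all_of(cast<PHINode>(I)->incoming_values(), [&](Value *Incoming) {
          return isHashLikeValue(Incoming, ShortBitWidth, Depth + 1);
        });
      default:
        return false;
      }
    }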
* MemorySSA: Add support for caching clobbering access in stores | Daniel Berlin | 2017-04-02 | 2 | -9/+9
Summary:
This enables us to cache the clobbering access for stores, despite the fact that we can't rewrite the use-def chains themselves.

Early testing shows that, after this change, for larger testcases, it will be a significant net positive (memory and time) to remove the walker caching.

Reviewers: george.burgess.iv, davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31567
llvm-svn: 299322
* Move def_chain iterator to MemorySSA.h so it can be reused | Daniel Berlin | 2017-04-01 | 1 | -36/+0
llvm-svn: 299297
* MemorySSA: Push const correctness further. | Daniel Berlin | 2017-04-01 | 1 | -8/+10
llvm-svn: 299295
* MemorySSA: Kill the WalkTargetCache now that we have getBlockDefs. | Daniel Berlin | 2017-04-01 | 1 | -39/+6
llvm-svn: 299294
* Do not translate rint into nearbyint, but truncate it like nearbyint. | Joerg Sonnenberger | 2017-03-31 | 1 | -1/+2
A common way to implement nearbyint is by fiddling with the floating point environment and calling rint. This is used at least by the BSD libm and musl. As such, canonicalizing the latter to the former will create infinite loops for libm and generally pessimize performance, at least when the generic C versions are used.

This change preserves the rint in the libcall translation and also handles the domain truncation logic, so that rint with float argument will be reduced to rintf etc.
llvm-svn: 299247
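For reference, the libm pattern described above looks roughly like this; a sketch of the approach, not code taken from either library:

    #include <cfenv>
    #include <cmath>

    // nearbyint must not raise FE_INEXACT, so a libm can save the FP
    // environment, call rint (which may raise it), and then restore the
    // environment, discarding the flag. Canonicalizing rint back to
    // nearbyint would therefore send the two functions into a loop.
    double nearbyint_via_rint(double x) {
      std::fenv_t Env;
      std::feholdexcept(&Env); // save environment, clear exception flags
      double R = std::rint(x);
      std::fesetenv(&Env);     // restore, dropping any FE_INEXACT from rint
      return R;
    }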
* [SimplifyIndvar] Replace the sdiv used by IV if we can prove both of its operands are non-negative | Hongbin Zheng | 2017-03-30 | 1 | -4/+38
Since there is no sdiv in SCEV, a 'udiv' is a better canonical form than an 'sdiv' as the user of an induction variable.

Differential Revision: https://reviews.llvm.org/D31488
llvm-svn: 299118
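A sketch of the replacement itself; the surrounding SimplifyIndvar plumbing is omitted and the helper is written for illustration only:

    #include "llvm/Analysis/ScalarEvolution.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static void replaceSDivWithUDiv(BinaryOperator *SDiv, ScalarEvolution &SE) {
      // If SCEV proves both operands non-negative, sdiv and udiv compute the
      // same value, and SCEV can reason about the udiv.
      if (!SE.isKnownNonNegative(SE.getSCEV(SDiv->getOperand(0))) ||
          !SE.isKnownNonNegative(SE.getSCEV(SDiv->getOperand(1))))
        return;
      auto *UDiv = BinaryOperator::Create(
          Instruction::UDiv, SDiv->getOperand(0), SDiv->getOperand(1),
          SDiv->getName() + ".udiv", SDiv);
      UDiv->setIsExact(SDiv->isExact());
      SDiv->replaceAllUsesWith(UDiv);
      SDiv->eraseFromParent();
    }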
* Revert "[asan] Delay creation of asan ctor."Alex Shlyapnikov2017-03-271-13/+5
| | | | | | | | Speculative revert. Some libfuzzer tests are affected. This reverts commit r298731. llvm-svn: 298890
* [LoopUnroll] Remap references in peeled iteration | Serge Pavlov | 2017-03-26 | 1 | -4/+5
References in cloned blocks must be remapped prior to dominator calculation.

Differential Revision: https://reviews.llvm.org/D31281
llvm-svn: 298811
* Split the SimplifyCFG pass into two variants. | Joerg Sonnenberger | 2017-03-26 | 1 | -5/+14
The first variant contains all current transformations except transforming switches into lookup tables. The second variant contains all current transformations.

The switch-to-lookup-table conversion results in code that is more difficult to analyze and optimize by other passes. Most importantly, it can inhibit Dead Code Elimination. As such it is often beneficial to only apply this transformation very late. A common example is inlining, which can often result in range restrictions for the switch expression.

Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6% smaller binary.

Differential Revision: https://reviews.llvm.org/D30333
llvm-svn: 298799
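An invented example (not from LLVM's test suite) of why doing the conversion late helps:

    // After inlining, the switch operand may be known to lie in a small
    // range, so unreachable cases can be deleted as dead code. Once the
    // switch has been turned into a load from a constant lookup table, that
    // pruning is no longer available and the whole table stays in the binary.
    static int classify(int kind) {
      switch (kind) { // candidate for switch-to-lookup-table conversion
      case 0: return 10;
      case 1: return 20;
      case 2: return 42;
      case 3: return 7;
      default: return -1;
      }
    }
    int caller(bool b) {
      return classify(b ? 0 : 1); // after inlining, only cases 0 and 1 are live
    }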
* [IR] Make SwitchInst::CaseIt almost a normal iterator. | Chandler Carruth | 2017-03-26 | 2 | -12/+15
This moves it to the iterator facade utilities, giving it full random access semantics, etc. It can also now be used with standard algorithms like std::all_of and std::any_of and range adaptors like llvm::reverse.

Also make the semantics of iterating match what every other iterator uses and forbid decrementing past the begin iterator. This was used as a hacky way to work around iterator invalidation. However, every instance trying to do this failed to actually avoid touching invalid iterators despite the clear documentation that the removed and all subsequent iterators become invalid, including the end iterator. So I've added a return of the next iterator to removeCase and rewritten the loops that were doing this to correctly follow the iterator pattern of either incrementing or removing and assigning fresh values to the iterator and the end.

In one case we were trying to go backwards to make this cleaner but it doesn't actually work. I've made that code match the code we use everywhere else to remove cases as we iterate.

This changes the order of cases in one test output and I moved that test to CHECK-DAG so it wouldn't care -- the order isn't semantically meaningful anyways.
llvm-svn: 298791
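The erase-as-you-iterate pattern this enables looks roughly like the following; the helper and the predicate are placeholders for illustration:

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Pred is any callable taking the dereferenced case; kept generic so the
    // exact case-handle type does not matter for the sketch.
    template <typename PredT>
    static void pruneCases(SwitchInst *SI, PredT Pred) {
      for (auto CaseIt = SI->case_begin(), End = SI->case_end(); CaseIt != End;) {
        if (Pred(*CaseIt)) {
          // removeCase invalidates this and all following iterators, including
          // the end iterator, so take the returned iterator and refresh End.
          CaseIt = SI->removeCase(CaseIt);
          End = SI->case_end();
        } else {
          ++CaseIt;
        }
      }
    }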
* [asan] Delay creation of asan ctor. | Evgeniy Stepanov | 2017-03-24 | 1 | -5/+13
Create the constructor in the module pass. This is needed for the GC-friendly globals change, where the constructor can be put in a comdat in some cases, but we don't know about that in the function pass.
llvm-svn: 298731