path: root/llvm/lib/Transforms/Scalar/SROA.cpp
Commit message | Author | Date | Files | Lines
...
* Fix a case in SROA where lifetime intrinsics could inhibit alloca promotion. | Owen Anderson | 2014-08-07 | 1 | -0/+4
    In this case, the code path dealing with vector promotion was missing the explicit checks for lifetime intrinsics that were present on the corresponding integer promotion path. llvm-svn: 215148
* AA metadata refactoring (introduce AAMDNodes) | Hal Finkel | 2014-07-24 | 1 | -8/+13
    In order to enable the preservation of noalias function parameter information after inlining, and the representation of block-level __restrict__ pointer information (etc.), additional kinds of aliasing metadata will be introduced. This metadata needs to be carried around in AliasAnalysis::Location objects (and MMOs at the SDAG level), and so we need to generalize the current scheme (which is hard-coded to just one TBAA MDNode*).
    This commit introduces only the necessary refactoring to allow for the introduction of other aliasing metadata types, but does not actually introduce any (that will come in a follow-up commit). What it does introduce is a new AAMDNodes structure to hold all of the aliasing metadata nodes associated with a particular memory-accessing instruction, and uses that structure instead of the raw MDNode* in AliasAnalysis::Location, etc.
    No functionality change intended. llvm-svn: 213859
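    A minimal C++ sketch of the shape being described (illustrative only, not the in-tree definition; the TBAA member comes from the commit text, the other fields are hypothetical placeholders for the follow-up metadata kinds):

        #include "llvm/IR/Metadata.h"

        // A small aggregate carried per memory access instead of a lone TBAA MDNode*.
        struct AAMDNodesSketch {
          llvm::MDNode *TBAA = nullptr;    // the previously hard-coded TBAA node
          llvm::MDNode *Scope = nullptr;   // hypothetical slot for a later metadata kind
          llvm::MDNode *NoAlias = nullptr; // hypothetical slot for a later metadata kind
        };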
* Allow isDereferenceablePointer to look through some bitcasts | Hal Finkel | 2014-07-10 | 1 | -3/+3
    isDereferenceablePointer should not give up upon encountering any bitcast. If we're casting from a pointer to a larger type to a pointer to a small type, we can continue by examining the bitcast's operand. This missing capability was noted in a comment in the function.
    In order for this to work, isDereferenceablePointer now takes an optional DataLayout pointer (essentially all callers already had such a pointer available). Most code uses isDereferenceablePointer through isSafeToSpeculativelyExecute (which already took an optional DataLayout pointer), and to enable the LICM test case, LICM needs to actually provide its DL pointer to isSafeToSpeculativelyExecute (which it was not doing previously). llvm-svn: 212686
* SROA: Only split loads on byte boundaries | Duncan P. N. Exon Smith | 2014-06-17 | 1 | -5/+7
    r199771 accidentally broke the logic that makes sure that SROA only splits loads on byte boundaries. If such a split happens, some bits get lost when reassembling loads of wider types, causing data corruption. Move the width check up to reject such splits early, avoiding the corruption. Fixes PR19250.
    Patch by: Björn Steinbrink <bsteinbr@gmail.com> llvm-svn: 211082
* [C++] Use 'nullptr'. Transforms edition. | Craig Topper | 2014-04-25 | 1 | -50/+53
    llvm-svn: 207196
* [Modules] Fix potential ODR violations by sinking the DEBUG_TYPE definition below all of the header #include lines, lib/Transforms/... edition. | Chandler Carruth | 2014-04-22 | 1 | -1/+2
    This one is tricky for two reasons. We again have a couple of passes that define something else before the includes as well. I've sunk their name macros with the DEBUG_TYPE. Also, InstCombine contains headers that need DEBUG_TYPE, so now those headers #define and #undef DEBUG_TYPE around their code, leaving them well formed modular headers. Fixing these headers was a large motivation for all of these changes, as "leaky" macros of this form are hard on the modules implementation. llvm-svn: 206844
* [C++11] Add range based accessors for the Use-Def chain of a Value. | Chandler Carruth | 2014-03-09 | 1 | -26/+19
    This requires a number of steps.
    1) Move value_use_iterator into the Value class as an implementation detail.
    2) Change it to actually be a *Use* iterator rather than a *User* iterator.
    3) Add an adaptor which is a User iterator that always looks through the Use to the User.
    4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
    5) Add the range adaptors as Value::uses() and Value::users().
    6) Update *all* of the callers to correctly distinguish between whether they wanted a use_iterator (and to explicitly dig out the User when needed), or a user_iterator which makes the Use itself totally opaque.
    Because #6 requires churning essentially everything that walked the Use-Def chains, I went ahead and added all of the range adaptors and switched them to range-based loops where appropriate. Also because the renaming requires at least churning every line of code, it didn't make any sense to split these up into multiple commits -- all of which would touch all of the same lines of code.
    The result is still not quite optimal. The Value::use_iterator is a nice regular iterator, but Value::user_iterator is an iterator over User*s rather than over the User objects themselves. As a consequence, it fits a bit awkwardly into the range-based world and it has the weird extra-dereferencing 'operator->' that so many of our iterators have. I think this could be fixed by providing something which transforms a range of T&s into a range of T*s, but that *can* be separated into another patch, and it isn't yet 100% clear whether this is the right move. However, this change gets us most of the benefit and cleans up a substantial amount of code around Use and User. =] llvm-svn: 203364
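    A short sketch of how the new range adaptors read in practice; the helper functions here are hypothetical, but Value::users(), Value::uses() and Use::getOperandNo() are the interfaces described above:

        #include "llvm/IR/Instructions.h"
        #include "llvm/IR/Value.h"
        using namespace llvm;

        static bool hasLoadUser(Value *V) {
          // user_iterator view: one User* per use; the Use itself stays opaque.
          for (User *U : V->users())
            if (isa<LoadInst>(U))
              return true;
          return false;
        }

        static unsigned countUsesAsFirstOperand(Value *V) {
          // use_iterator view: the Use is explicit, so the operand slot is visible.
          unsigned N = 0;
          for (Use &U : V->uses())
            if (U.getOperandNo() == 0)
              ++N;
          return N;
        }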
* [Layering] Move InstVisitor.h into the IR library as it is pretty obviously coupled to the IR. | Chandler Carruth | 2014-03-06 | 1 | -1/+1
    llvm-svn: 203064
* [Layering] Move DebugInfo.h into the IR library where its implementation already lives. | Chandler Carruth | 2014-03-06 | 1 | -1/+1
    llvm-svn: 203046
* [Layering] Move DIBuilder.h into the IR library where its implementation already lives. | Chandler Carruth | 2014-03-06 | 1 | -1/+1
    llvm-svn: 203038
* [C++11] Add 'override' keyword to virtual methods that override their base class. | Craig Topper | 2014-03-05 | 1 | -6/+6
    llvm-svn: 202953
* [C++11] Remove the completely unnecessary requirement on SetVector's remove_if that its predicate is adaptable. | Chandler Carruth | 2014-03-03 | 1 | -1/+1
    We don't actually need this; we can write a generic adapter for any predicate. This lets us remove some very wrong std::function usages. We should never be using std::function for predicates to algorithms. This incurs an *indirect* call overhead for every evaluation of the predicate, and makes it very hard to inline through. llvm-svn: 202742
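    A generic sketch of the underlying idea (not SetVector's actual code): take the predicate as a template parameter rather than a std::function, so any callable works and the call can be inlined.

        #include <algorithm>
        #include <vector>

        // Erase-remove with a templated predicate; no type erasure, no indirect call.
        template <typename T, typename UnaryPredicate>
        void erase_if(std::vector<T> &Vec, UnaryPredicate P) {
          Vec.erase(std::remove_if(Vec.begin(), Vec.end(), P), Vec.end());
        }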
* [C++11] Add two range adaptor views to User: operands and operand_values. | Chandler Carruth | 2014-03-03 | 1 | -6/+5
    The first provides a range view over operand Use objects, and the second provides a range view over the Value*s being used by those operands.
    The naming is "STL-style" rather than "LLVM-style" because we have historically named iterator methods STL-style, and range methods seem to have far more in common with their iterator counterparts than with "normal" APIs. Feel free to bikeshed on this one if you want, I'm happy to change these around if people feel strongly.
    I've switched code in SROA and LCG to exercise these mostly to ensure they work correctly -- we don't really have an easy way to unittest this and they're trivial. llvm-svn: 202687
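    A tiny sketch of the two views (the function is hypothetical; operands() and operand_values() are the accessors described above):

        #include "llvm/IR/Instruction.h"
        using namespace llvm;

        static void visitOperands(Instruction &I) {
          for (Use &Op : I.operands())        // range over the Use objects
            (void)Op.getOperandNo();          // slot-aware processing goes here
          for (Value *V : I.operand_values()) // range over the Value*s being used
            (void)V;                          // value-only processing goes here
        }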
* [C++11] Replace llvm::tie with std::tie. | Benjamin Kramer | 2014-03-02 | 1 | -2/+2
    The old implementation is no longer needed in C++11. llvm-svn: 202644
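    A minimal illustration of the drop-in replacement (the container and function here are arbitrary):

        #include <set>
        #include <tuple>

        void insertOnce(std::set<int> &Visited, int Key) {
          std::set<int>::iterator It;
          bool Inserted;
          std::tie(It, Inserted) = Visited.insert(Key); // formerly llvm::tie(It, Inserted)
          (void)It;
          (void)Inserted;
        }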
* [C++11] Replace llvm::next and llvm::prior with std::next and std::prev. | Benjamin Kramer | 2014-03-02 | 1 | -3/+3
    Remove the old functions. llvm-svn: 202636
* Now that we have C++11, turn simple functors into lambdas and remove a ton of boilerplate. | Benjamin Kramer | 2014-03-01 | 1 | -30/+9
    No intended functionality change. llvm-svn: 202588
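    A sketch of the pattern (the Slice struct is a simplified stand-in, not SROA's real type): a named comparator functor becomes an inline lambda at the call site.

        #include <algorithm>
        #include <vector>

        struct Slice { unsigned Begin, End; };

        void sortSlices(std::vector<Slice> &Slices) {
          // Pre-C++11 this comparison lived in a separately defined functor struct.
          std::sort(Slices.begin(), Slices.end(),
                    [](const Slice &L, const Slice &R) { return L.Begin < R.Begin; });
        }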
* [SROA] Use the correct index integer size in GEPs through non-default address spaces. | Chandler Carruth | 2014-02-26 | 1 | -5/+10
    This isn't really a correctness issue (the values are truncated) but it's much cleaner. Patch by Matt Arsenault! llvm-svn: 202252
* [SROA] Teach SROA how to handle pointers from address spaces other than the default. | Chandler Carruth | 2014-02-26 | 1 | -9/+14
    Based on the patch by Matt Arsenault, D1764! I switched one place to use the more direct pointer type to compute the desired address space, and I reworked the memcpy rewriting section to reflect significant refactorings that this patch helped inspire.
    Thanks to several of the folks who helped review and improve the patch as well. llvm-svn: 202247
* [SROA] Split the alignment computation completely for the memcpy rewriting to work independently for the slice side and the other side. | Chandler Carruth | 2014-02-26 | 1 | -16/+16
    This allows us to only compute the minimum of the two when we actually rewrite to a memcpy that needs to take the minimum, and preserve higher alignment for one side or the other when rewriting to loads and stores.
    This fix was inspired by seeing the result of some refactoring that makes addrspace handling better. llvm-svn: 202242
* [SROA] The original refactoring inspired by the addrspace patch in D1764, which in turn set off the other refactorings to make 'getSliceAlign()' a sensible thing. | Chandler Carruth | 2014-02-26 | 1 | -21/+21
    There are two possible inputs to the required alignment of a memory transfer intrinsic: the alignment constraints of the source and the destination. If we are *only* introducing a (potentially new) offset onto one side of the transfer, we don't need to consider the alignment constraints of the other side. Use this to simplify the logic feeding into alignment computation for unsplit transfers.
    Also, hoist the clamp of the magical zero alignment for these intrinsics to the more customary one alignment early. This lets several other conditions melt away.
    No functionality changed. There is a further improvement this exposes which *will* change functionality, but that's arriving in a separate patch. llvm-svn: 202232
* [SROA] Yet another slight refactoring that simplifies an API in the rewriting logic: don't pass custom offsets for the adjusted pointer to the new alloca. | Chandler Carruth | 2014-02-26 | 1 | -20/+19
    We always passed NewBeginOffset here. Sometimes we spelled it BeginOffset, but only when they were in fact equal. What's worse, the API is set up so that you can't reasonably call it with anything else -- it assumes that you're passing it an offset relative to the *original* alloca that happens to fall within the new one. That's the whole point of NewBeginOffset, it's the clamped beginning offset.
    No functionality changed. llvm-svn: 202231
* [SROA] Simplify the computing of alignment: we only ever need the alignment of the slice being rewritten, not any arbitrary offset. | Chandler Carruth | 2014-02-26 | 1 | -30/+16
    Every caller is really just trying to compute the alignment for the whole slice, never for some arbitrary alignment. They are also just passing a type when they have one to see if we can skip an explicit alignment in the IR by using the type's alignment. This makes for a much simpler interface.
    Another refactoring inspired by the addrspace patch for SROA, although only loosely related. llvm-svn: 202230
* [SROA] Use NewOffsetBegin in the unsplit case for memset merely for consistency with memcpy rewriting, and fix a latent bug in the alignment management for memset. | Chandler Carruth | 2014-02-26 | 1 | -3/+4
    The alignment issue is that getAdjustedAllocaPtr is computing the *relative* offset into the new alloca, but the alignment isn't being set to the relative offset; it was using the absolute offset which is into the old alloca.
    I don't think it's possible to write a test case that actually reaches this code where the resulting alignment would be observably different, but the intent was clearly to use the relative offset within the new alloca. llvm-svn: 202229
* [SROA] Use the members for New{Begin,End}Offset in the rewrite helpers rather than passing them as arguments. | Chandler Carruth | 2014-02-26 | 1 | -14/+8
    While I generally prefer actual arguments, in this case the readability loss is substantial. By using members we avoid repeatedly calculating the offsets, and once we're using members it is useful to ensure that those names *always* refer to the original-alloca-relative new offset for a rewritten slice.
    No functionality changed. Follow-up refactoring, all toward getting the address space patch merged. llvm-svn: 202228
* [SROA] Compute the New{Begin,End}Offset values once for each alloca slice being rewritten. | Chandler Carruth | 2014-02-26 | 1 | -40/+24
    We had the same code scattered across most of the visits. Instead, compute the new offsets and the slice size once when we start to visit a particular slice, and use the member variables from then on. This reduces quite a bit of code duplication.
    No functionality changed. Refactoring inspired to make it easier to apply the address space patch to SROA. llvm-svn: 202227
* [SROA] Fix PR18615 with some long overdue simplifications to the bounds checking in SROA. | Chandler Carruth | 2014-02-26 | 1 | -9/+6
    The primary change is to just rely on uge for checking that the offset is within the allocation size. This removes the explicit checks against isNegative which were terribly error prone (including the reversed logic that led to PR18615) and prevented us from supporting stack allocations larger than half the address space.... Ok, so maybe the latter isn't *common* but it's a silly restriction to have.
    Also, we used to try to support a PHI node which loaded from before the start of the allocation if any of the loaded bytes were within the allocation. This doesn't make any sense; we have never really supported loading or storing *before* the allocation starts. The simplified logic just doesn't care.
    We continue to allow loading past the end of the allocation in part to support cases where there is a PHI and some loads are larger than others and the larger ones reach past the end of the allocation. We could solve this a different and more conservative way, but I'm still somewhat paranoid about this. llvm-svn: 202224
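    A small sketch of the reasoning behind relying on uge (the helper is hypothetical): an APInt offset that is "negative" is a huge unsigned value, so a single unsigned compare against the allocation size rejects it along with genuinely too-large offsets.

        #include "llvm/ADT/APInt.h"
        using namespace llvm;

        static bool offsetEscapesAlloca(const APInt &Offset, uint64_t AllocSize) {
          // Replaces the error-prone pair: Offset.isNegative() || Offset.uge(AllocSize).
          return Offset.uge(AllocSize);
        }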
* [SROA] Add an off-by-default *strict* inbounds check to SROA. | Chandler Carruth | 2014-02-25 | 1 | -0/+42
    I had SROA implemented this way a long time ago and due to the overwhelming bugs that surfaced, moved to a much more relaxed variant. Richard Smith would like to understand the magnitude of this problem and it seems fairly harmless to keep some flag-controlled logic to get the extremely strict behavior here. I'll remove it if it doesn't prove useful. llvm-svn: 202193
* Make DataLayout a plain object, not a pass. | Rafael Espindola | 2014-02-25 | 1 | -2/+3
    Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM that don't handle passes to also use DataLayout. llvm-svn: 202168
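    A sketch of the idiom this sets up, assuming the 2014-era API named in the commit (DataLayoutPass queried via getAnalysisIfAvailable; not guaranteed to match later LLVM):

        #include "llvm/IR/DataLayout.h"
        #include "llvm/IR/Function.h"
        #include "llvm/Pass.h"
        using namespace llvm;

        namespace {
        struct ExamplePass : FunctionPass {
          static char ID;
          ExamplePass() : FunctionPass(ID) {}
          bool runOnFunction(Function &F) override {
            // DataLayout is a plain object owned by the optional DataLayoutPass.
            DataLayoutPass *DLP = getAnalysisIfAvailable<DataLayoutPass>();
            const DataLayout *DL = DLP ? &DLP->getDataLayout() : nullptr;
            (void)DL; // consult DL when present
            return false;
          }
        };
        char ExamplePass::ID = 0;
        } // namespace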
* [SROA] Use the original load name with the SROA-prefixed IRB rather than just "load". | Chandler Carruth | 2014-02-25 | 1 | -2/+2
    This helps avoid pointless de-duping with order-sensitive numbers as we already have unique names from the original load. It also makes the resulting IR quite a bit easier to read. llvm-svn: 202140
* [SROA] Thread the ability to add a pointer-specific name prefix through the pointer adjustment code. | Chandler Carruth | 2014-02-25 | 1 | -21/+53
    This is the primary code path that creates totally new instructions in SROA and being able to lump them based on the pointer value's name for which they were created causes *significantly* fewer name collisions and general noise in the debug output. This is particularly significant because that noise was making it much harder to track down instability in the output of SROA, as name de-duplication is a totally harmless form of instability that gets in the way of seeing real problems.
    The new fancy naming scheme tries to dig out the root "pre-SROA" name for pointer values and associate that all the way through the pointer formation instructions. Digging out the root is important to prevent the multiple iterative rounds of SROA from just layering too much cruft on top of cruft here. We already track the layers of SROA's iteration in the alloca name prefix. We don't need to duplicate it here.
    Should have no functionality change, and shouldn't have any really measurable impact on NDEBUG builds, as most of the complex logic is debug-only. llvm-svn: 202139
* [SROA] Rather than copying the logic for building a name prefix into the PHI-pointer builder, just copy the builder and clobber the obvious fields. | Chandler Carruth | 2014-02-25 | 1 | -3/+3
    llvm-svn: 202136
* [SROA] Simplify some of the logic to dig out the old pointer value by using OldPtr more heavily. | Chandler Carruth | 2014-02-25 | 1 | -14/+10
    Lots of this code was written before the rewriter had an OldPtr member set up ahead of time. There are already asserts in place that should ensure this doesn't change any functionality. llvm-svn: 202135
* [SROA] Adjust to new clang-format style. | Chandler Carruth | 2014-02-25 | 1 | -2/+2
    llvm-svn: 202134
* [SROA] Fix a *glaring* bug in r202091: you have to actually *write* the break statement, not just think it to yourself.... | Chandler Carruth | 2014-02-25 | 1 | -0/+2
    No idea how this worked at all, much less survived most bots, my bootstrap, and some bot bootstraps!
    The Polly one didn't survive, and this was filed as PR18959. I don't have a reduced test case and honestly I'm not seeing the need. What we probably need here are better asserts / debug-build behavior in SmallPtrSet so that this madness doesn't make it so far. llvm-svn: 202129
* Silence GCC warning | Alexey Samsonov | 2014-02-25 | 1 | -1/+1
    llvm-svn: 202119
* [SROA] Add a debugging tool which shuffles the slices sequence prior to sorting it. | Chandler Carruth | 2014-02-25 | 1 | -0/+19
    This helps uncover latent reliance on the original orderings, which aren't guaranteed to be preserved by std::sort (but often are), and which are based on the use-def chain orderings that also aren't (technically) guaranteed.
    Only available in C++11 debug builds, and behind a flag to prevent noise at the moment, but this is generally useful so figured I'd put it in the tree rather than keeping it out-of-tree. llvm-svn: 202106
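    A sketch of the debugging aid described above (the flag, struct, and comparator are hypothetical): optionally shuffle before sorting so hidden reliance on the incoming order shows up as nondeterminism.

        #include <algorithm>
        #include <random>
        #include <vector>

        struct Slice { unsigned Begin, End; };

        void sortForRewrite(std::vector<Slice> &Slices, bool ShuffleBeforeSort) {
          if (ShuffleBeforeSort) {
            // Randomize the incoming order so any ordering dependence becomes visible.
            std::mt19937 Gen(std::random_device{}());
            std::shuffle(Slices.begin(), Slices.end(), Gen);
          }
          std::sort(Slices.begin(), Slices.end(),
                    [](const Slice &L, const Slice &R) { return L.Begin < R.Begin; });
        }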
* [SROA] Use a more direct way of determining whether we are processing the destination operand or source operand of a memmove. | Chandler Carruth | 2014-02-25 | 1 | -2/+3
    It so happens that it was impossible for SROA to try to rewrite self-memmove where the operands are *identical*, because either such a thing is volatile (and we don't rewrite) or it is non-volatile, and we don't even register it as a use of the alloca.
    However, making the 'IsDest' test *rely* on this subtle fact is... Very confusing for the reader. We should use the direct and readily available test of the Use* which gives us concrete information about which operand is being rewritten.
    No functionality changed, I hope! ;] llvm-svn: 202103
* [SROA] Fix another instability in SROA with respect to the slice ordering. | Chandler Carruth | 2014-02-25 | 1 | -66/+63
    The fundamental problem that we're hitting here is that the use-def chain ordering is *itself* not a stable thing to be relying on in the rewriting for SROA. Further, we use a non-stable sort over the slices to arrange them based on the section of the alloca they're operating on. With a debugging STL implementation (or different implementations in stage2 and stage3) this can cause stage2 != stage3.
    The specific aspect of this problem fixed in this commit deals with the rewriting and load-speculation around PHIs and Selects. This, like many other aspects of the use-rewriting in SROA, is really part of the "strong SSA-formation" that is done by SROA where it works very hard to canonicalize loads and stores in *just* the right way to satisfy the needs of mem2reg[1].
    When we have a select (or a PHI) with 2 uses of the same alloca, we test that loads downstream of the select are speculatable around it twice. If only one of the operands to the select needs to be rewritten, then if we get lucky we rewrite that one first and the select is immediately speculatable. This can cause the order of operand visitation, and thus the order of slices to be rewritten, to change an alloca from promotable to non-promotable and vice versa.
    The fix is to defer all of the speculation until *after* the rewrite phase is done. Once we've rewritten everything, we can accurately test for whether speculation will work (once, instead of twice!) and the order ceases to matter. This also happens to simplify the other subtlety of speculation -- we need to *not* speculate anything unless the result of speculating will make the alloca fully promotable by mem2reg. I had a previous attempt at simplifying this, but it was still pretty horrible.
    There is actually already a *really* nice test case for this in basictest.ll, but on multiple STL implementations and inputs, we just got "lucky". Fortunately, the test case is very small and we can essentially build it in exactly the opposite way to get reasonable coverage in both directions even from normal STL implementations. llvm-svn: 202092
* Trivial cleanup: reuse existing variable. | Rafael Espindola | 2014-02-14 | 1 | -2/+1
    Extracted while trying to understand http://llvm-reviews.chandlerc.com/D1764. Patch by Matt Arsenault. llvm-svn: 201425
* Disable most IR-level transform passes on functions marked 'optnone'. | Paul Robinson | 2014-02-06 | 1 | -0/+3
    Ideally only those transform passes that run at -O0 remain enabled; in reality we get as close as we reasonably can. Passes are responsible for disabling themselves; it's not the job of the pass manager to do it for them. llvm-svn: 200892
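    A minimal sketch of the "pass disables itself" convention (the helper is hypothetical; the attribute query is the standard Function API):

        #include "llvm/IR/Attributes.h"
        #include "llvm/IR/Function.h"
        using namespace llvm;

        static bool shouldRunOn(const Function &F) {
          // A transform pass bails out early on functions marked 'optnone'.
          return !F.hasFnAttribute(Attribute::OptimizeNone);
        }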
* [SROA] Fix a bug which could cause the common type finding to return inconsistent results for different orderings of alloca slices. | Chandler Carruth | 2014-01-21 | 1 | -22/+19
    The fundamental issue is that it is just always a mistake to return early from this function. There is no effective early exit to leverage. This patch stops trying to do so and simplifies the code a bit as a consequence.
    Original diagnosis and patch by James Molloy with some name tweaks by me in part reflecting feedback from Duncan Smith on the mailing list. llvm-svn: 199771
* Fix a really nasty SROA bug with how we handled out-of-bounds memcpy intrinsics. | Chandler Carruth | 2014-01-19 | 1 | -12/+49
    Reported on the list by Evan with a couple of attempts to fix, but it took a while to dig down to the root cause. There are two overlapping bugs here, both centering around the circumstance of discovering a memcpy operand which is known to be completely outside the bounds of the alloca.
    First, we need to kill the *other* side of the memcpy if it was added to this alloca. Otherwise we'll factor it into our slicing and try to rewrite it even though we know for a fact that it is dead. This is made more tricky because we can visit the sides in either order. So we have to both kill the other side and skip instructions marked as dead. The latter really should be goodness in every case, but here is a matter of correctness.
    Second, we need to actually remove the *uses* of the alloca by the memcpy when queuing it for later deletion. Otherwise it may still be using the alloca when we go to promote it (if the rewrite re-uses the existing alloca instruction). Do this by factoring out the use-clobbering used when nixing a Phi argument and re-using it across the operands of a to-be-deleted instruction. llvm-svn: 199590
* [PM] Split DominatorTree into a concrete analysis result object which can be used by both the new pass manager and the old. | Chandler Carruth | 2014-01-13 | 1 | -3/+5
    This removes it from any of the virtual mess of the pass interfaces and lets it derive cleanly from the DominatorTreeBase<> template. In turn, tons of boilerplate interface can be nuked and it turns into a very straightforward extension of the base DominatorTree interface.
    The old analysis pass is now a simple wrapper. The names and style of this split should match the split between CallGraph and CallGraphWrapperPass. All of the users of DominatorTree have been updated to match using many of the same tricks as with CallGraph. The goal is that the common type remains the resulting DominatorTree rather than the pass. This will make subsequent work toward the new pass manager significantly easier.
    Also in numerous places things became cleaner because I switched from re-running the pass (!!! midway through some other pass's run !!!) to directly recomputing the domtree. llvm-svn: 199104
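    A sketch of the updated client pattern (the pass itself is hypothetical): the analysis pass is a thin wrapper and clients work with the DominatorTree result it holds.

        #include "llvm/IR/Dominators.h"
        #include "llvm/Pass.h"
        using namespace llvm;

        namespace {
        struct UsesDomTree : FunctionPass {
          static char ID;
          UsesDomTree() : FunctionPass(ID) {}
          void getAnalysisUsage(AnalysisUsage &AU) const override {
            AU.addRequired<DominatorTreeWrapperPass>();
            AU.setPreservesAll();
          }
          bool runOnFunction(Function &F) override {
            // The wrapper pass hands back the plain DominatorTree object.
            DominatorTree &DT = getAnalysis<DominatorTreeWrapperPass>().getDomTree();
            (void)DT; // query DT here
            return false;
          }
        };
        char UsesDomTree::ID = 0;
        } // namespace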
* [cleanup] Move the Dominators.h and Verifier.h headers into the IR directory. | Chandler Carruth | 2014-01-13 | 1 | -1/+1
    These passes are already defined in the IR library, and it doesn't make any sense to have the headers in Analysis.
    Long term, I think there is going to be a much better way to divide these matters. The dominators code should be fully separated into the abstract graph algorithm and have that put in Support where it becomes obvious that even Clang's CFGBlocks can use it. Then the verifier can manually construct dominance information from the Support-driven interface while the Analysis library can provide a pass which both caches, reconstructs, and supports a nice update API.
    But those are very long term, and so I don't want to leave the really confusing structure until that day arrives. llvm-svn: 199082
* Add missed cleanup from r198456 | Alp Toker | 2014-01-04 | 1 | -4/+6
    All other uses of this macro in LLVM/clang have been moved to the function definition, so follow suit (and the usage advice) here too for consistency. llvm-svn: 198516
* Add a LLVM_DUMP_METHOD macro. | Nico Weber | 2014-01-03 | 1 | -2/+2
    The motivation is to mark dump methods as used in debug builds so that they can be called from lldb, but to not do so in release builds so that they can be dead-stripped. There's lots of potential follow-up work suggested in the thread "Should dump methods be LLVM_ATTRIBUTE_USED only in debug builds?" on cfe-dev, but everyone seems to agree on this subset.
    Macro name chosen by fair coin toss. llvm-svn: 198456
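    An approximate sketch of how such a macro can be defined (MY_DUMP_METHOD is a stand-in, not the verbatim in-tree definition, and the GNU attributes assume GCC/Clang):

        #include <cstdio>

        #if !defined(NDEBUG)
        #define MY_DUMP_METHOD __attribute__((noinline, used))
        #else
        #define MY_DUMP_METHOD __attribute__((noinline))
        #endif

        struct Thing {
          int Value = 0;
          void dump() const;
        };

        // Kept alive in debug builds so a debugger can call it; dead-strippable in release.
        MY_DUMP_METHOD void Thing::dump() const { std::printf("Thing(%d)\n", Value); }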
* Fix an issue where SROA computed different results based on the relative order of slices of the alloca which have exactly the same size and other properties. | Chandler Carruth | 2013-11-19 | 1 | -10/+19
    This was found by a perniciously unstable sort implementation used to flush out buggy uses of the algorithm.
    The fundamental idea is that findCommonType should return the best common type it can find across all of the slices in the range. There were two bugs here previously:
    1) We would accept an integer type smaller than a byte-width multiple, and if there were different bit-width integer types, we would accept the first one. This caused an actual failure in the testcase updated here when the sort order changed.
    2) If we found a bad combination of types or a non-load, non-store use before an integer typed load or store we would bail, but if we found the integer typed load or store, we would use it. The correct behavior is to always use an integer typed operation which covers the partition if one exists.
    While a clever debugging sort algorithm found problem #1 in our existing test cases, I have no useful test case ideas for #2. I spotted it by inspection when looking at this code. llvm-svn: 195118
* Drop spurious handle in comment. | Benjamin Kramer | 2013-09-22 | 1 | -1/+1
    llvm-svn: 191172
* SROA: Handle casts involving vectors of pointers and integer scalars. | Benjamin Kramer | 2013-09-21 | 1 | -11/+47
    SROA wants to convert any types of equivalent widths but it's not possible to convert vectors of pointers to an integer scalar with a single cast. As a workaround we add a bitcast to the corresponding int ptr type first. This type of cast used to be an edge case but has become common with SLP vectorization.
    Fixes PR17271. llvm-svn: 191143
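    A sketch of the workaround's general shape using IRBuilder (the helper and type arguments are hypothetical; the exact cast sequence SROA emits may differ): the pointer vector first becomes a vector of pointer-sized integers, which can then be reinterpreted as one wide integer of the same total width.

        #include "llvm/IR/IRBuilder.h"
        using namespace llvm;

        static Value *vecOfPtrsToWideInt(IRBuilder<> &B, Value *PtrVec,
                                         VectorType *IntVecTy, IntegerType *WideTy) {
          Value *IntVec = B.CreatePtrToInt(PtrVec, IntVecTy); // e.g. <2 x i64*> -> <2 x i64>
          return B.CreateBitCast(IntVec, WideTy);             // <2 x i64> -> i128
        }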
* Revert r187191, which broke opt -mem2reg on the testcases included in PR16867. | Nick Lewycky | 2013-08-13 | 1 | -82/+13
    However, opt -O2 doesn't run mem2reg directly so nobody noticed until r188146 when SROA started sending more things directly down the PromoteMemToReg path.
    In order to revert r187191, I also revert dependent revisions r187296, r187322 and r188146. Fixes PR16867. Does not add the testcases from that PR, but both of them should get added for both mem2reg and sroa when this revert gets unreverted. llvm-svn: 188327