path: root/llvm/lib/Transforms/InstCombine
* Shrinkify libcalls: use float versions of double libm functions with fast-math (bug 17850) [Sanjay Patel, 2014-10-22; 1 file, -7/+1]
  When a call to a double-precision libm function has fast-math semantics (via function attribute for now, because there is no IR-level FMF on calls), we can avoid fpext/fptrunc operations and use the float version of the call if the input and output are both float.
  We already do this optimization using a command-line option; this patch just adds the ability for fast-math to use the existing functionality. I moved the cl::opt from InstructionCombining into SimplifyLibCalls because it's only ever used internally to that class. Modified the existing test cases to use the unsafe-fp-math attribute rather than repeating all tests.
  This patch should solve: http://llvm.org/bugs/show_bug.cgi?id=17850
  Differential Revision: http://reviews.llvm.org/D5893
  llvm-svn: 220390
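  A minimal IR sketch of the shrinking (function and names are mine, assuming the caller carries the fast-math function attribute the tests use):

    declare double @sqrt(double)

    define float @shrink_me(float %f) #0 {
      %d = fpext float %f to double
      %call = call double @sqrt(double %d)   ; only the float bits are ever used
      %r = fptrunc double %call to float
      ret float %r                           ; shrinks to: call float @sqrtf(float %f)
    }

    attributes #0 = { "unsafe-fp-math"="true" }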
* Revert "Teach the load analysis to allow finding available values which ↵Hans Wennborg2014-10-211-2/+1
| | | | | | | | require" (r220277) This seems to have caused PR21330. llvm-svn: 220349
* Add minnum / maxnum intrinsics [Matt Arsenault, 2014-10-21; 1 file, -0/+84]
  These are named following the IEEE-754 names for these functions, rather than the libm fmin / fmax, to avoid possible ambiguities. Some languages may implement something resembling fmin / fmax which returns NaN if either operand is NaN, in order to propagate errors. These intrinsics implement the IEEE-754 semantics of returning the other operand if either is a NaN, treating NaN as missing data.
  llvm-svn: 220341
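  A small usage sketch (function and names are mine):

    declare float @llvm.minnum.f32(float, float)

    define float @min_example(float %a, float %b) {
      ; IEEE-754 minNum: if %a is NaN the result is %b, and vice versa;
      ; NaN results only when both operands are NaN
      %r = call float @llvm.minnum.f32(float %a, float %b)
      ret float %r
    }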
* Preserve 'nonnull' when changing type of the load. [Philip Reames, 2014-10-21; 1 file, -0/+1]
  When changing the type of a load in Chandler's recent InstCombine changes, we can preserve the new 'nonnull' metadata.
  I considered adding an assert since 'nonnull' is only valid on pointer types, but casting a pointer to a non-pointer would involve more than a bitcast anyway. If someone extends this transform to handle more than bitcasts, the verifier will report the malformed IR, so a separate assertion isn't needed. Also, the fpmath flags would have the same problem.
  llvm-svn: 220324
* InstCombine: Simplify FoldICmpCstShrCst [David Majnemer, 2014-10-21; 1 file, -48/+16]
  This function was complicated by the fact that it tried to perform canonicalizations that were already performed by InstSimplify. Remove this extra code and move the tests over to InstSimplify. Add asserts to make sure our preconditions hold before we make any assumptions.
  llvm-svn: 220314
* Teach the load analysis to allow finding available values which require inttoptr or ptrtoint casts, provided there is a datalayout available [Chandler Carruth, 2014-10-21; 1 file, -1/+2]
  Eventually, the datalayout can just be required, but in practice it will always be there today.
  To go with the ability to expose available values requiring a ptrtoint or inttoptr cast, helpers are added to perform one of these three casts. These smarts are necessary to finish canonicalizing loads and stores to the operational type requirements without regressing fundamental combines.
  I've added some test cases. These should actually improve as the load combining and store combining improve, but they may fundamentally be highlighting some missing combines for select, in addition to exercising the specific added logic in the load analysis.
  llvm-svn: 220277
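  A sketch of the kind of value forwarding this enables (IR shapes are mine, in the typed-pointer syntax of the time):

    ; the stored value is an i64, but the load wants a pointer
    store i64 %v, i64* %p
    %q = bitcast i64* %p to i8**
    %l = load i8** %q   ; available value: inttoptr i64 %v to i8*, given a datalayout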
* Introduce enum values for previously defined metadata types. (NFC) [Philip Reames, 2014-10-21; 1 file, -6/+2]
  Our metadata scheme lazily assigns IDs to string metadata, but we have a mechanism to preassign them as well. Using a preassigned ID is helpful since we get compile-time type checking and avoid some (minimal) string construction and comparison. This change adds enum values for three existing metadata types:
    + MD_nontemporal = 9,               // "nontemporal"
    + MD_mem_parallel_loop_access = 10, // "llvm.mem.parallel_loop_access"
    + MD_nonnull = 11                   // "nonnull"
  I went through and updated various uses as well. I made no attempt to get all uses; I focused on the ones which were easy to grep for and easy to translate. For example, there were several items in LoopInfo.cpp I chose not to update.
  llvm-svn: 220248
* Teach the load analysis driving core instcombine logic and other bits of logic to look through pointer casts [Chandler Carruth, 2014-10-20; 1 file, -1/+2]
  This makes them trivially stronger in the face of loads and stores with intervening pointer casts. I've included a few test cases that demonstrate the kind of folding instcombine can do without pointer casts, and then variations which obfuscate the logic through bitcasts. Without this patch, the variations all fail to optimize fully.
  This is more important now than it has been in the past, as I've started moving the load canonicalization to more closely follow the value type requirements rather than the pointer type requirements, and thus this needs to be prepared for more pointer casts. When I made the same change to stores, several test cases regressed without logic along these lines, so I wanted to systematically improve matters first.
  llvm-svn: 220178
* Do a better and more complete job of preserving metadata when combining loads. [Chandler Carruth, 2014-10-19; 1 file, -8/+58]
  This handles many more cases than just the AA metadata, some of them suggested by Hal in his review of the AA metadata handling patch. I've tried to test this behavior where tractable to do so.
  I'll point out that I have specifically *not* included a test for debuginfo because it was going to require 2 or 3 times as much work to craft some input which would survive the "helpful" stripping of debug info metadata that doesn't match the desired schema. This is another good example of why the current state of write-ability for our debug info metadata is unacceptable. I spent over 30 minutes trying to conjure some test case that would survive, even copying from other debug info tests, but it always failed to survive with no explanation of why or how I might fix it. =[
  llvm-svn: 220165
* InstCombine: (sub (or A B) (xor A B)) --> (and A B) [David Majnemer, 2014-10-19; 1 file, -0/+9]
  The following implements the transformation: (sub (or A B) (xor A B)) --> (and A B).
  Patch by Ankur Garg!
  Differential Revision: http://reviews.llvm.org/D5719
  llvm-svn: 220163
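  A quick sanity check with concrete bits (example values are mine, not from the patch):

    ; A = 0b1100, B = 0b1010:
    ; (A | B) = 0b1110 and (A ^ B) = 0b0110 are built from disjoint bit sets,
    ; so A|B = (A&B) + (A^B) with no carries, and the difference is A&B = 0b1000
    define i4 @t(i4 %a, i4 %b) {
      %or  = or i4 %a, %b
      %xor = xor i4 %a, %b
      %sub = sub i4 %or, %xor   ; folds to: and i4 %a, %b
      ret i4 %sub
    }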
* InstCombine: Optimize icmp eq/ne (shl Const2, A), Const1 [David Majnemer, 2014-10-19; 2 files, -1/+50]
  The following implements the optimization for sequences of the form:
    icmp eq/ne (shl Const2, A), Const1
  Such sequences can be transformed to:
    icmp eq/ne A, (TrailingZeros(Const1) - TrailingZeros(Const2))
  This handles only the equality operators for now. Other operators need to be handled.
  Patch by Ankur Garg!
  llvm-svn: 220162
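  A worked instance with constants of my choosing:

    ; cttz(32) - cttz(4) = 5 - 2 = 3, so (4 << %a) == 32 exactly when %a == 3
    %shl = shl i32 4, %a
    %cmp = icmp eq i32 %shl, 32   ; becomes: icmp eq i32 %a, 3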
* Preserve AA metadata when combining (cast (load (...))) -> (load (cast (...))). [Chandler Carruth, 2014-10-18; 1 file, -0/+3]
  llvm-svn: 220141
* [InstCombine] Do an about-face on how LLVM canonicalizes (cast (load ...)) and (load (cast ...)): canonicalize toward the former. [Chandler Carruth, 2014-10-18; 1 file, -72/+43]
  Historically, we've tried to load using the type of the *pointer*, and tried to match that type as closely as possible, removing as many pointer casts as we could and trading them for bitcasts of the loaded value. This is deeply and fundamentally wrong. Repeat after me: memory does not have a type! This was a hard lesson for me to learn working on SROA.
  There is only one thing that should actually drive the type used for a pointer, and that is the type which we need to use to load from that pointer. Matching up pointer types to the loaded value types is very useful because it minimizes the physical size of the IR required for no-op casts. Similarly, the only thing that should drive the type used for a loaded value is *how that value is used*! Again, this minimizes casts. And in fact, the *only* things motivating types in any part of LLVM's IR are the types used by the operations in the IR. We should match them as closely as possible.
  I've ended up removing some tests here as they were testing bugs or behavior that is no longer present. Mostly though, this is just cleanup to let the tests continue to function as intended.
  The only fallout I've found so far from this change was SROA, and I have fixed it to not be impeded by the different type of load. If you find more places where this change causes optimizations not to fire, those too are likely bugs where we are assuming that the type of pointers is "significant" for optimization purposes.
  llvm-svn: 220138
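  The two shapes in question, in a sketch of my own (typed-pointer syntax of the time):

    ; new canonical form: load with the pointer's type, then cast the *value*...
    %v = load i32* %p
    %f = bitcast i32 %v to float
    ; ...instead of casting the *pointer* to match the use:
    ;   %q = bitcast i32* %p to float*
    ;   %f = load float* %q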
* Reapply r219832 - InstCombine: Narrow switch instructions using known bits. [Akira Hatanaka, 2014-10-16; 1 file, -0/+31]
  The code committed in r219832 asserted when it attempted to shrink a switch statement whose type was larger than 64 bits.
  llvm-svn: 219902
* Revert r219832. [Akira Hatanaka, 2014-10-16; 1 file, -31/+0]
  llvm-svn: 219884
* InstCombine: Narrow switch instructions using known bits. [Akira Hatanaka, 2014-10-15; 1 file, -0/+31]
  Truncate the operands of a switch instruction to a narrower type if the upper bits are known to be all ones or zeros.
  rdar://problem/17720004
  llvm-svn: 219832
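  A sketch of the narrowing (values and labels are mine):

    ; the upper 24 bits of %x are known zero
    %x = and i32 %in, 255
    switch i32 %x, label %def [ i32 1, label %a
                                i32 2, label %b ]
    ; can become a switch on the truncated value:
    ;   %t = trunc i32 %x to i8
    ;   switch i8 %t, label %def [ i8 1, label %a
    ;                              i8 2, label %b ]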
* InstCombine: Don't miscompile X % ((Pow2 << A) >>u B) [David Majnemer, 2014-10-14; 1 file, -7/+4]
  We assumed that A must be greater than B because the right-hand side of a remainder operator must be nonzero. However, it is possible for A to be less than B if Pow2 is a power of two greater than 1.
  Take for example:
    i32 %A = 0
    i32 %B = 31
    i32 Pow2 = 2147483648
  ((Pow2 << 0) >>u 31) is non-zero but A is less than B.
  This fixes PR21274.
  llvm-svn: 219713
* fix formatting; NFC [Sanjay Patel, 2014-10-14; 1 file, -33/+25]
  llvm-svn: 219645
* InstCombine: Fix miscompile in X % -Y -> X % Y transform [David Majnemer, 2014-10-13; 1 file, -6/+6]
  We assumed that negation operations of the form (0 - %Z) resulted in a negative number. This isn't true if %Z was originally negative. Substituting the negative number into the remainder operation may result in undefined behavior because the dividend might be INT_MIN.
  This fixes PR21256.
  llvm-svn: 219639
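  One concrete trigger, with values of my own choosing:

    %n = sub i32 0, %z        ; %z = -1, so %n = 1
    %r = srem i32 %x, %n      ; %x = INT_MIN: well defined, yields 0
    ; rewriting this to srem i32 %x, %z would divide INT_MIN by -1,
    ; which overflows and is undefined behavior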
* InstCombine: Don't miscompile (x lshr C1) udiv C2 [David Majnemer, 2014-10-13; 1 file, -4/+10]
  We have a transform that changes:
    (x lshr C1) udiv C2
  into:
    x udiv (C2 << C1)
  However, it is unsafe to do so if C2 << C1 discards any of C2's bits.
  This fixes PR21255.
  llvm-svn: 219634
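  A concrete instance of the unsafe case (constants are mine):

    %s = lshr i32 %x, 1
    %d = udiv i32 %s, 2147483649   ; %s <= 0x7fffffff, so %d is always 0
    ; but (0x80000001 << 1) truncates to 2, so the rewrite
    ;   udiv i32 %x, 2
    ; would be nonzero for any %x >= 2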
* InstCombine: Turn (x != 0 & x <u C) into the canonical range check form (x-1 <u C-1) [Benjamin Kramer, 2014-10-12; 1 file, -0/+2]
  llvm-svn: 219585
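  A sketch of the fold with a constant I picked; subtracting 1 wraps x = 0 around to UINT_MAX, which then fails the unsigned bound, so one compare replaces two:

    %ne  = icmp ne i32 %x, 0
    %ult = icmp ult i32 %x, 8
    %and = and i1 %ne, %ult
    ; canonical form:
    ;   %t   = add i32 %x, -1
    ;   %and = icmp ult i32 %t, 7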
* InstCombine: Simplify commonIDivTransforms [David Majnemer, 2014-10-12; 1 file, -86/+76]
  A helper routine, MultiplyOverflows, was a less efficient reimplementation of APInt's smul_ov and umul_ov. While we are here, clean up the code so it's more uniform.
  No functionality change intended.
  llvm-svn: 219583
* InstCombine: Don't fold (X <<s log(INT_MIN)) /s INT_MIN to X [David Majnemer, 2014-10-11; 1 file, -1/+2]
  Consider the case where X is 2: (2 <<s 31) /s -2147483648 is zero, but we would fold to X. Note that this fold is valid when we are in the unsigned domain, because there we require NUW: 2 <<u 31 results in poison.
  This fixes PR21245.
  llvm-svn: 219568
* InstCombine, InstSimplify: (%X /s C1) /s C2 isn't always 0 when C1 * C2 overflows [David Majnemer, 2014-10-11; 1 file, -5/+4]
  Consider:
    C1 = INT_MIN
    C2 = -1
  C1 * C2 overflows without a doubt, but consider the following:
    %x = i32 INT_MIN
  This means that (%X /s C1) is 1, and (%X /s C1) /s C2 is -1.
  N.B. Move the unsigned version of this transform to InstSimplify; it doesn't create any new instructions.
  This fixes PR21243.
  llvm-svn: 219567
* InstCombine: mul to shl shouldn't preserve nsw [David Majnemer, 2014-10-11; 1 file, -2/+0]
  Consider:
    mul nsw i32 %x, -2147483648
  This instruction will not result in poison if %x is 1. However, if we transform it into:
    shl nsw i32 %x, 31
  then we will be generating poison, because we just shifted into the sign bit.
  This fixes PR21242.
  llvm-svn: 219566
* [InstCombine] Fix wrong folding of constant comparisons involving ashr and negative values. [Andrea Di Biagio, 2014-10-09; 1 file, -1/+2]
  This patch fixes a bug in method InstCombiner::FoldCmpCstShrCst where we wrongly computed the distance between the highest set bits of two negative values.
  This fixes PR21222.
  Differential Revision: http://reviews.llvm.org/D5700
  llvm-svn: 219406
* Revert "[InstCombine] re-commit r218721 with fix for pr21199"Justin Bogner2014-10-083-147/+8
| | | | | | | | | | | | | This seems to cause a miscompile when building clang, which causes a bootstrapped clang to fail or crash in several of its tests. See: http://lab.llvm.org:8013/builders/clang-x86_64-darwin11-RA/builds/1184 http://bb.pgr.jp/builders/clang-3stage-x86_64-linux/builds/7813 This reverts commit r219282. llvm-svn: 219317
* Format spacing and remove extra lines to comply with standards. NFC. [Suyog Sarda, 2014-10-08; 1 file, -5/+6]
  Differential Revision: http://reviews.llvm.org/D5649
  llvm-svn: 219286
* [InstCombine] re-commit r218721 with fix for pr21199 [Gerolf Hoflehner, 2014-10-08; 3 files, -8/+147]
  The icmp-select-icmp optimization targets select-icmp.eq only. This is now ensured by testing the branch predicate explicitly. This commit also includes the test case for pr21199.
  llvm-svn: 219282
* Revert r219175 - [InstCombine] re-commit r218721 icmp-select-icmp optimization [Hans Wennborg, 2014-10-08; 3 files, -151/+8]
  This seems to have caused PR21199.
  llvm-svn: 219264
* Reformat if statement to comply with LLVM standards. NFC. [Suyog Sarda, 2014-10-07; 1 file, -2/+4]
  Differential Revision: http://reviews.llvm.org/D5644
  llvm-svn: 219203
* Reformat to comply with LLVM coding standards using clang-format. NFC. [Suyog Sarda, 2014-10-07; 1 file, -5/+4]
  Differential Revision: http://reviews.llvm.org/D5645
  llvm-svn: 219202
* [InstCombine] Reformat if statements to comply with LLVM Coding Standards. [Tilmann Scheller, 2014-10-07; 1 file, -2/+6]
  Patch by Sonam Kumari!
  Differential Revision: http://reviews.llvm.org/D5643
  llvm-svn: 219198
* [InstCombine] re-commit r218721 icmp-select-icmp optimization [Gerolf Hoflehner, 2014-10-07; 3 files, -8/+151]
  Takes care of the assert that caused build failures. Rather than asserting, the code now checks that the definition and use are in the same block, and does not attempt to optimize when that is not the case.
  llvm-svn: 219175
* [InstCombine] Simplify the logic from r219067 using ValueTracking [Hal Finkel, 2014-10-05; 1 file, -12/+4]
  Joerg suggested on IRC that I look at generalizing the logic from r219067 to handle more general redundancies (like removing an assume(x > 3) dominated by an assume(x > 5)). The way to do this would be to ask ValueTracking to determine the value of the i1 argument. It turns out that ValueTracking is not very good at this right now (although it does get the trivial redundancy case) because it does not understand ICmps. Nevertheless, the resulting code in InstCombine is simpler than r219067, so we might as well do it now.
  llvm-svn: 219070
* [InstCombine] Remove redundant @llvm.assume intrinsics [Hal Finkel, 2014-10-04; 1 file, -0/+17]
  For any @llvm.assume intrinsic, if there is another which dominates it and uses the same condition, then it is redundant and can be removed. While this does not alter the semantics of the @llvm.assume intrinsics, it makes subsequent handling more efficient (and the resulting IR easier to read).
  llvm-svn: 219067
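  A minimal sketch of the redundancy (condition and layout are mine):

    declare void @llvm.assume(i1)

    define void @f(i1 %c) {
      call void @llvm.assume(i1 %c)
      ; ... straight-line code, so the first assume dominates ...
      call void @llvm.assume(i1 %c)   ; same condition, dominated: removable
      ret void
    }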
* Optimize square root squared (PR21126). [Sanjay Patel, 2014-10-02; 1 file, -0/+5]
  When unsafe-fp-math is enabled, we can turn sqrt(X) * sqrt(X) into X. This can happen in the real world when calculating x ** 3/2. This occurs in test-suite/SingleSource/Benchmarks/BenchmarkGame/n-body.c.
  Differential Revision: http://reviews.llvm.org/D5584
  llvm-svn: 218906
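  The IR shape in question (sketch of mine; the fold needs unsafe-fp-math because sqrt of a negative X yields NaN, and NaN * NaN is NaN rather than X):

    %s = call double @sqrt(double %x)
    %m = fmul double %s, %s   ; with unsafe-fp-math: folds to %x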
* Use the local variable that other clauses around here are already using. [Sanjay Patel, 2014-10-02; 1 file, -1/+1]
  llvm-svn: 218876
* Revert r218721, r218735. [Evgeniy Stepanov, 2014-10-01; 3 files, -155/+8]
  Failing bootstrap on Linux (arm, x86).
    http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux/builds/13139/steps/bootstrap%20clang/logs/stdio
    http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15-selfhost/builds/470
    http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/8518
  llvm-svn: 218752
* [InstCombine] Fix for assert build failures caused by r218721 [Gerolf Hoflehner, 2014-10-01; 1 file, -1/+7]
  The icmp-select-icmp optimization made the implicit assumption that the select-icmp instructions are in the same block and asserted on it. The fix explicitly checks for that condition and conservatively suppresses the optimization when it is violated.
  llvm-svn: 218735
* [InstCombine] Optimize icmp-select-icmp [Gerolf Hoflehner, 2014-10-01; 3 files, -8/+149]
  In special cases, select instructions can be eliminated by replacing them with a cheaper bitwise operation, even when the select result is used outside its home block. The instances implemented are patterns like:
    %x = icmp.eq
    %y = select %x, %r, null
    %z = icmp.eq|neq %y, null
    br %z, true, false
      ==>
    %x = icmp.ne
    %y = icmp.eq %r, null
    %z = or %x, %y
    br %z, true, false
  The optimization is integrated into the instruction combiner and performed only when all uses of the select result can be replaced by the select operand proper. For this, dominator information is used, and dominance is now a required analysis pass in the combiner. The optimization itself is iterative. The critical step is to replace the select result with the non-constant select operand, so the select becomes local, and the combiner iteratively works out a simpler code pattern and eventually eliminates the select.
  rdar://17853760
  llvm-svn: 218721
* Reapply fix in r217988 (reverted in r217989) and remove the alternative fix committed in r217987. [David Blaikie, 2014-09-17; 1 file, -1/+1]
  This type isn't owned polymorphically (as demonstrated by making the dtor protected and everything still compiling), so just address the warning by protecting the base dtor and making the derived class final.
  llvm-svn: 217990
* Revert "Fix -Wnon-virtual-dtor warning introduced in r217982."David Blaikie2014-09-171-1/+1
| | | | | | | | An alternative fix was already committed. This reverts commit r217988. llvm-svn: 217989
* Fix -Wnon-virtual-dtor warning introduced in r217982. [David Blaikie, 2014-09-17; 1 file, -1/+1]
  llvm-svn: 217988
* Refactoring SimplifyLibCalls to remove static initializers and generally cleaning up the code. [Chris Bieneman, 2014-09-17; 1 file, -5/+6]
  Summary: This eliminates ~200 lines of code, mostly file-scoped struct definitions that were unnecessary.
  Reviewers: chandlerc, resistor
  Reviewed By: resistor
  Subscribers: morisset, resistor, llvm-commits
  Differential Revision: http://reviews.llvm.org/D5364
  llvm-svn: 217982
* [InstCombine] Fix wrong folding of constant comparison involving ashr and negative quantities (PR20945). [Andrea Di Biagio, 2014-09-17; 1 file, -9/+13]
  Example:
    define i1 @foo(i32 %a) {
      %shr = ashr i32 -9, %a
      %cmp = icmp ne i32 %shr, -5
      ret i1 %cmp
    }
  Before this fix, the instruction combiner wrongly thought that %shr could never have been equal to -5; therefore, %cmp was always folded to 'true'. However, when %a is equal to 1, %cmp evaluates to 'false'. Therefore, in this example, it is not valid to fold %cmp to 'true'.
  The problem only affected the case where the comparison was between negative quantities, where one of the quantities was obtained from arithmetic shift of a negative constant.
  This patch fixes the problem with the wrong folding (fixes PR20945). With this patch, the 'icmp' from the example is now simplified to a comparison between %a and 1. This still allows us to get rid of the arithmetic shift (%shr).
  llvm-svn: 217950
* Check for all known bits on ret in InstCombine [Hal Finkel, 2014-09-07; 2 files, -0/+19]
  From a combination of @llvm.assume calls (and perhaps through other means, such as range metadata), it is possible that all bits of a return value might be known. Previously, InstCombine did not check for this (which is understandable given assumptions of constant propagation), but this means that we'd miss simple cases where assumptions are involved.
  llvm-svn: 217346
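  A sketch of the kind of case this catches (example is mine):

    declare void @llvm.assume(i1)

    define i32 @f(i32 %x) {
      %c = icmp eq i32 %x, 42
      call void @llvm.assume(i1 %c)
      ret i32 %x   ; every bit of %x is known here: folds to ret i32 42
    }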
* Add additional patterns for @llvm.assume in ValueTracking [Hal Finkel, 2014-09-07; 1 file, -0/+24]
  This builds on r217342, which added the infrastructure to compute known bits using assumptions (@llvm.assume calls). That original commit added only a few patterns (to catch common cases related to determining pointer alignment); this change adds several other patterns for simple cases.
  r217342 established that, for assume(v & b = a), for bits in the mask that are known to be one, we can propagate known bits from a to v. It also had a known-bits transfer for assume(a = b). This patch adds:
    assume(~(v & b) = a):  For those bits in the mask that are known to be one, we can propagate inverted known bits from a to v.
    assume(v | b = a):     For those bits in b that are known to be zero, we can propagate known bits from a to v.
    assume(~(v | b) = a):  For those bits in b that are known to be zero, we can propagate inverted known bits from a to v.
    assume(v ^ b = a):     For those bits in b that are known to be zero, we can propagate known bits from a to v. For those bits in b that are known to be one, we can propagate inverted known bits from a to v.
    assume(~(v ^ b) = a):  For those bits in b that are known to be zero, we can propagate inverted known bits from a to v. For those bits in b that are known to be one, we can propagate known bits from a to v.
    assume(v << c = a):    For those bits in a that are known, we can propagate them to known bits in v shifted to the right by c.
    assume(~(v << c) = a): For those bits in a that are known, we can propagate them inverted to known bits in v shifted to the right by c.
    assume(v >> c = a):    For those bits in a that are known, we can propagate them to known bits in v shifted to the left by c.
    assume(~(v >> c) = a): For those bits in a that are known, we can propagate them inverted to known bits in v shifted to the left by c.
    assume(v >=_s c) where c is non-negative: the sign bit of v is zero.
    assume(v >_s c) where c is at least -1:   the sign bit of v is zero.
    assume(v <=_s c) where c is negative:     the sign bit of v is one.
    assume(v <_s c) where c is non-positive:  the sign bit of v is one.
    assume(v <=_u c): transfer the known high zero bits.
    assume(v <_u c):  transfer the known high zero bits (if c is known to be a power of 2, transfer one more).
  A small addition to InstCombine was necessary for some of the test cases. The problem is that when InstCombine was simplifying and, or, etc., it would fail to check the 'do I know all of the bits' condition before checking less specific conditions, and so would not fully constant-fold the result. I'm not sure how to trigger this aside from using assumptions, so I've just included the change here.
  llvm-svn: 217343
* Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.) [Hal Finkel, 2014-09-07; 13 files, -188/+265]
  This change, which allows @llvm.assume to be used from within computeKnownBits (and other associated functions in ValueTracking), adds some (optional) parameters to computeKnownBits and friends. These functions now (optionally) take a "context" instruction pointer, an AssumptionTracker pointer, and also a DomTree pointer, and most of the changes are just to pass this new information when it is easily available from InstSimplify, InstCombine, etc.
  As explained below, the significant conceptual change is that known properties of a value might depend on the control-flow location of the use (because we care that the @llvm.assume dominates the use, since assumptions have control-flow dependencies). This means that, when we ask if bits are known in a value, we might get different answers for different uses.
  The significant changes are all in ValueTracking. Two main changes: First, as with the rest of the code, new parameters need to be passed around. To make this easier, I grouped them into a structure, and I made internal static versions of the relevant functions that take this structure as a parameter. The new code does as you might expect: it looks for @llvm.assume calls that make use of the value we're trying to learn something about (often indirectly), attempts to pattern match that expression, and uses the result if successful. By making use of the AssumptionTracker, the process of finding @llvm.assume calls is not expensive.
  Part of the structure being passed around inside ValueTracking is a set of already-considered @llvm.assume calls. This is to prevent a query using, for example, the assume(a == b), from recursing on itself. The context and DT params are used to find applicable assumptions. An assumption needs to dominate the context instruction, or come after it deterministically. In this latter case we only handle the specific case where both the assumption and the context instruction are in the same block, and we need to exclude assumptions from being used to simplify their own ephemeral values (those which contribute only to the assumption), because otherwise the assumption would prove its feeding comparison trivial and would be removed.
  This commit adds the plumbing and the logic for a simple masked-bit propagation (just enough to write a regression test). Future commits add more patterns (and, correspondingly, more regression tests).
  llvm-svn: 217342
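  A sketch of the masked-bit propagation this commit enables (example is mine):

    %m = and i32 %v, 255
    %c = icmp eq i32 %m, 7
    call void @llvm.assume(i1 %c)
    ; at uses dominated by the assume, the low 8 bits of %v are known to be 7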
* Add an Assumption-Tracking Pass [Hal Finkel, 2014-09-07; 3 files, -4/+23]
  This adds an immutable pass, AssumptionTracker, which keeps a cache of @llvm.assume call instructions within a module. It uses callback value handles to keep stale functions and intrinsics out of the map, and it relies on any code that creates new @llvm.assume calls to notify it of the new instructions. The benefit is that code needing to find @llvm.assume intrinsics can do so directly, without scanning the function, thus allowing the cost of @llvm.assume handling to be negligible when none are present.
  The current design is intended to be lightweight. We don't keep track of anything until we need a list of assumptions in some function. The first time this happens, we scan the function. After that, we add/remove @llvm.assume calls from the cache in response to registration calls and ValueHandle callbacks.
  There are no new direct test cases for this pass, but because it calls its validation function upon module finalization, we'll pick up detectable inconsistencies from the other tests that touch @llvm.assume calls.
  This pass will be used by follow-up commits that make use of @llvm.assume.
  llvm-svn: 217334