path: root/llvm/lib/Analysis
Commit message (author, date, files changed, -deleted/+added lines)
* [NFC] Rename isKnownViaSimpleReasoning to isKnownViaNonRecursiveReasoning (Max Kazantsev, 2018-02-15, 1 file, -15/+15)
    llvm-svn: 325216
* [SCEV] Favor isKnownViaSimpleReasoning over constant ranges check (Max Kazantsev, 2018-02-15, 1 file, -6/+6)
    There is a more powerful but still simple function `isKnownViaSimpleReasoning`
    that does the constant range check plus a few additional checks. We use it in
    some places (e.g. when proving implications), but in other places we only check
    constant ranges.

    Currently, the indvar simplifier fails to remove the check in the following loop:

        int inc = ...;
        for (int i = inc, j = inc - 1; i < 200; ++i, ++j)
          if (i > j) { ... }

    This patch replaces all usages of `isKnownPredicateViaConstantRanges` with
    `isKnownViaSimpleReasoning` to get smarter proofs. In particular, it fixes the
    case above.

    Reviewed-By: sanjoy
    Differential Revision: https://reviews.llvm.org/D43175

    llvm-svn: 325214
* Adding a width of the GEP index to the Data Layout. (Elena Demikhovsky, 2018-02-14, 7 files, -39/+52)
    Make the width of the GEP index, which is used for address calculation, one of
    the pointer properties in the Data Layout:
    p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits.
    The index size parameter is optional; if not specified, it is equal to the
    pointer size.

    Until now, the InstCombiner normalized GEPs and extended the index operand to
    the pointer width. That works fine if you can convert a pointer to an integer
    for address calculation, and all registered targets do this. But some ISAs have
    a very restricted instruction set for pointer calculation. During discussions it
    was decided to retrieve the information for the GEP index from the Data Layout:
    http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html

    I added an interface to the Data Layout and changed the InstCombiner and some
    other passes to take the index width into account. This change does not affect
    any in-tree target. I added tests to cover data layouts with explicitly
    specified index size.

    Differential Revision: https://reviews.llvm.org/D42123

    llvm-svn: 325102
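    For illustration, a data-layout string with an explicit index width might look
    like the sketch below. The concrete values are made up, and the exact field
    layout is an assumption (the usual size/alignment fields followed by the
    optional trailing index width).

        ; Hypothetical layout: little-endian, 64-bit pointers, with a trailing
        ; 32-bit GEP index width as described in the entry above.
        target datalayout = "e-p:64:64:64:32"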
* [InstSimplify] allow exp/log simplifications with only 'reassoc' FMF (Sanjay Patel, 2018-02-12, 1 file, -4/+4)
    These intrinsic folds were added with D41381, but only allowed with isFast().
    That's more than necessary because FMF has 'reassoc' to apply to these kinds of
    folds after D39304, and that's all we need in these cases.

    Differential Revision: https://reviews.llvm.org/D43160

    llvm-svn: 324967
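    A sketch of the kind of fold this refers to; whether this exact exp/log pair is
    among the D41381 folds is an assumption here.

        declare float @llvm.exp.f32(float)
        declare float @llvm.log.f32(float)

        define float @log_of_exp(float %x) {
          ; With only 'reassoc' on both calls, log(exp(x)) can now fold to %x;
          ; previously the full set of fast-math flags (isFast) was required.
          %e = call reassoc float @llvm.exp.f32(float %x)
          %l = call reassoc float @llvm.log.f32(float %e)
          ret float %l
        }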
* [SCEV] Make getPostIncExpr guaranteed to return AddRec (Max Kazantsev, 2018-02-12, 1 file, -0/+25)
    The current implementation of `getPostIncExpr` invokes `getAddExpr` for two
    recurrences and expects that it always returns a recurrence. But this is not
    guaranteed to happen if we have reached the max recursion depth or refused to
    make the SCEV simplification for other reasons. This patch changes the
    implementation so that it now always returns a SCEVAddRec without relying on
    `getAddExpr`.

    Differential Revision: https://reviews.llvm.org/D42953

    llvm-svn: 324866
* [ValueTracking] don't crash when assumptions conflict (PR36270) (Sanjay Patel, 2018-02-08, 1 file, -0/+8)
    The last assume in the test says that %B12 is 0. The first assume says that
    %and1 is less than %B12. Therefore, %and1 is unsigned less than 0...does not
    compute.

    That means this line:
        Known.Zero.setHighBits(RHSKnown.countMinLeadingZeros() + 1);
    ...tries to set more bits than exist.

    Differential Revision: https://reviews.llvm.org/D43052

    llvm-svn: 324610
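    A reduced sketch of the conflicting-assumption shape described above; the names
    and types are placeholders, not the actual PR36270 test case.

        declare void @llvm.assume(i1)

        define i32 @conflicting_assumes(i32 %and1, i32 %B12) {
          ; First assumption: %and1 u< %B12.
          %lt = icmp ult i32 %and1, %B12
          call void @llvm.assume(i1 %lt)
          ; Second assumption: %B12 == 0, so together the assumptions are
          ; unsatisfiable and value tracking must not over-count known bits.
          %z = icmp eq i32 %B12, 0
          call void @llvm.assume(i1 %z)
          ret i32 %and1
        }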
* Re-enable "[SCEV] Make isLoopEntryGuardedByCond a bit smarter" (Max Kazantsev, 2018-02-07, 1 file, -5/+57)
    The failures happened because of an assert which was overconfident about SCEV's
    proving capabilities and is generally not valid.

    Differential Revision: https://reviews.llvm.org/D42835

    llvm-svn: 324473
* Revert [SCEV] Make isLoopEntryGuardedByCond a bit smarter (Serguei Katkov, 2018-02-07, 1 file, -57/+5)
    Revert commit rL324453, which causes buildbot failures.

    Differential Revision: https://reviews.llvm.org/D42835

    llvm-svn: 324462
* [SCEV] Make isLoopEntryGuardedByCond a bit smarter (Max Kazantsev, 2018-02-07, 1 file, -5/+57)
    Sometimes `isLoopEntryGuardedByCond` cannot prove the predicate `a > b`
    directly. But it is a common situation when `a >= b` is known from ranges and
    `a != b` is known from a dominating condition. This patch teaches SCEV to
    combine these facts and prove the strict comparison via the non-strict one.

    Differential Revision: https://reviews.llvm.org/D42835

    llvm-svn: 324453
* Follow-up for r324429: "[LCSSAVerification] Run verification only when asserts are enabled." (Michael Zolotukhin, 2018-02-07, 1 file, -1/+5)
    Before r324429 we essentially didn't have a verification of LCSSA, so no wonder
    that it has been broken: currently loop-sink breaks it (the attached test
    illustrates the failure). It was detected during a stage2 RA build, so to
    unbreak it I'm disabling the check for now.

    llvm-svn: 324445
* [LCSSAVerification] Run verification only when asserts are enabled. (Michael Zolotukhin, 2018-02-07, 1 file, -1/+3)
    llvm-svn: 324429
* [InstCombine][ValueTracking] Match non-uniform constant power-of-two vectors (Simon Pilgrim, 2018-02-06, 1 file, -8/+5)
    Generalize existing constant matching to work with non-uniform constant vectors
    as well.

    Differential Revision: https://reviews.llvm.org/D42818

    llvm-svn: 324369
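    Illustration only: a non-uniform constant vector whose elements are all powers
    of two. Which specific folds pick up the generalized matching is an assumption;
    a divide by such a constant becoming a shift is shown as one plausible
    beneficiary.

        define <4 x i32> @udiv_by_pow2_vec(<4 x i32> %x) {
          ; Every divisor lane is a power of two, but they are not a splat; with
          ; non-uniform matching this remains recognizable, e.g. as a candidate
          ; for lshr <4 x i32> %x, <i32 1, i32 2, i32 3, i32 4>.
          %d = udiv <4 x i32> %x, <i32 2, i32 4, i32 8, i32 16>
          ret <4 x i32> %d
        }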
* [LoopStrengthReduce, x86] don't add cost for a cmp that will be macro-fused (PR35681) (Sanjay Patel, 2018-02-05, 1 file, -0/+4)
    In the motivating case from PR35681 and represented by the macro-fuse-cmp test:
    https://bugs.llvm.org/show_bug.cgi?id=35681
    ...there's a 37 -> 31 byte size win for the loop because we eliminate the big
    base address offsets.

    SPEC2017 on Ryzen shows no significant perf difference.

    Differential Revision: https://reviews.llvm.org/D42607

    llvm-svn: 324289
* Re-apply [SCEV] Fix isLoopEntryGuardedByCond usage (Serguei Katkov, 2018-02-05, 1 file, -3/+12)
    ScalarEvolution::isKnownPredicate invokes isLoopEntryGuardedByCond without
    checking that the SCEV is available at the entry point of the loop. That is
    incorrect and is fixed by this patch.

    Two bugs are additionally fixed: the assert is moved after the check that the
    loop is not a nullptr, and the usage of isLoopEntryGuardedByCond in
    ScalarEvolution::isImpliedCondOperandsViaNoOverflow is guarded by
    isAvailableAtLoopEntry.

    Reviewers: sanjoy, mkazantsev, anna, dorit, reames
    Reviewed By: mkazantsev
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D42417

    llvm-svn: 324204
* [Analysis] Support aggregate access types in TBAA (Ivan A. Kosarev, 2018-02-02, 1 file, -96/+217)
    This patch implements analysis for new-format TBAA access tags with aggregate
    types as their final access types.

    Differential Revision: https://reviews.llvm.org/D41501

    llvm-svn: 324092
* Remove CallGraphTraits and use equivalent methods in GraphTraits (Easwaran Raman, 2018-02-01, 1 file, -2/+2)
    Summary: D42698 adds child_edge_{begin|end} and children_edges to GraphTraits,
    which are used here. The reason for this change is to make it easy to use count
    propagation on ModuleSummaryIndex. As it stands, CallGraphTraits is in Analysis
    while ModuleSummaryIndex is in IR.

    Reviewers: davidxl, dberlin
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D42703

    llvm-svn: 323994
* [Analysis] Disable calls to *_finite and other glibc-only functions on Android. (Chih-Hung Hsieh, 2018-01-31, 1 file, -12/+6)
    Since r322087, glibc's finite lib calls are generated when possible. However,
    they are not supported on Android. This change also disables other functions
    not available on Android.

    Differential Revision: http://reviews.llvm.org/D42668

    llvm-svn: 323898
* [Lint] Upgrade uses of MemoryIntrinsic::getAlignment() to new API. (NFCI) (Daniel Neilson, 2018-01-31, 1 file, -5/+5)
    Summary: This change is part of step five in the series of changes to remove
    the alignment argument from memcpy/memmove/memset in favour of alignment
    attributes. In particular, this changes the Lint analysis to cease using the
    old getAlignment() API of MemoryIntrinsic in favour of getting source- and
    dest-specific alignments through the new API.

    Steps:
    Step 1) Remove alignment parameter and create alignment parameter attributes
            for memcpy/memmove/memset. (rL322965, rC322964, rL322963)
    Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with
            differing source and dest alignments. (rL323597)
    Step 3) Update Clang to use the new IRBuilder API. (rC323617)
    Step 4) Update Polly to use the new IRBuilder API. (rL323618)
    Step 5) Update LLVM passes that create memcpy/memmove calls to use the new
            IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment()
            to use [get|set]DestAlignment() and [get|set]SourceAlignment() instead.
    Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
            MemIntrinsicInst::[get|set]Alignment() methods.

    Reference
    http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

    llvm-svn: 323886
* Re-commit : [PowerPC] Add handling for ColdCC calling convention and a pass to mark candidates with coldcc attribute. (Zaara Syeda, 2018-01-30, 1 file, -0/+4)
    This recommits r322721, reverted due to sanitizer memory leak buildbot failures.

    Original commit message:
    This patch adds support for the coldcc calling convention for Power. This
    changes the set of non-volatile registers. It includes a pass to stress test
    the implementation by marking all static directly called functions with the
    coldcc attribute through the option -enable-coldcc-stress-test. It also
    includes an option, -ppc-enable-coldcc, to add the coldcc attribute to
    functions which are cold at all call sites based on BlockFrequencyInfo when
    the containing function does not call any non-cold functions.

    Differential Revision: https://reviews.llvm.org/D38413

    llvm-svn: 323778
* [InstSimplify] (X * Y) / Y --> X for relaxed floating-point ops (Sanjay Patel, 2018-01-30, 1 file, -0/+6)
    This is the FP counterpart that was mentioned in PR35709:
    https://bugs.llvm.org/show_bug.cgi?id=35709

    Differential Revision: https://reviews.llvm.org/D42385

    llvm-svn: 323716
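    A sketch in IR of the fold named in the subject. The exact fast-math flags
    InstSimplify requires are an assumption here; 'reassoc nnan' is shown as one
    plausible relaxed setting.

        define float @fmul_fdiv_cancel(float %x, float %y) {
          ; (%x * %y) / %y simplifies back to %x when the relaxed FP semantics
          ; permit reassociating and ignoring NaN corner cases.
          %m = fmul reassoc nnan float %x, %y
          %d = fdiv reassoc nnan float %m, %y
          ret float %d
        }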
* [InlineCost] Mark functions accessing varargs as not viable. (Florian Hahn, 2018-01-28, 1 file, -6/+12)
    This prevents functions accessing varargs from being inlined if they have the
    alwaysinline attribute.

    Reviewers: efriedma, rnk, davide
    Reviewed By: efriedma
    Differential Revision: https://reviews.llvm.org/D42556

    llvm-svn: 323619
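    A minimal sketch of the kind of callee this affects; the function is made up,
    and "accessing varargs" is assumed to mean using va_start/va_arg.

        declare void @llvm.va_start(i8*)
        declare void @llvm.va_end(i8*)

        ; A variadic, alwaysinline callee that actually reads its varargs; with
        ; this change InlineCost reports it as not viable rather than inlining it.
        define i32 @first_vararg(i32 %n, ...) alwaysinline {
          %ap = alloca i8*
          %ap.cast = bitcast i8** %ap to i8*
          call void @llvm.va_start(i8* %ap.cast)
          %v = va_arg i8** %ap, i32
          call void @llvm.va_end(i8* %ap.cast)
          ret i32 %v
        }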
* [SyntheticCounts] Rewrite the code using only graph traits. (Easwaran Raman, 2018-01-25, 1 file, -74/+65)
    Summary: The intent of this is to allow the code to be used with ThinLTO. In
    the thin-link phase, a traditional CallGraph cannot be computed even though all
    the necessary information (nodes and edges of a call graph) is available. This
    is due to the fact that the CallGraph class is closely tied to the IR. This
    patch first extends GraphTraits to add a CallGraphTraits graph. This is then
    used to implement a version of counts propagation on a generic callgraph.

    Reviewers: davidxl
    Subscribers: mehdi_amini, tejohnson, llvm-commits
    Differential Revision: https://reviews.llvm.org/D42311

    llvm-svn: 323475
* Re-land "[ThinLTO] Add call edges' relative block frequency to per-module ↵Easwaran Raman2018-01-251-3/+18
| | | | | | | | | | | | | | summary." It was reverted after buildbot regressions. Original commit message: This allows relative block frequency of call edges to be passed to the thinlink stage where it will be used to compute synthetic entry counts of functions. llvm-svn: 323460
* Revert "[ThinLTO] Add call edges' relative block frequency to per-module ↵Easwaran Raman2018-01-241-18/+3
| | | | | | | | summary." Causes buildbot regressions. llvm-svn: 323358
* [ThinLTO] Add call edges' relative block frequency to per-module summary. (Easwaran Raman, 2018-01-24, 1 file, -3/+18)
    Summary: This allows relative block frequency of call edges to be passed to the
    thinlink stage where it will be used to compute synthetic entry counts of
    functions.

    Reviewers: tejohnson, pcc
    Subscribers: mehdi_amini, llvm-commits, inglorion
    Differential Revision: https://reviews.llvm.org/D42212

    llvm-svn: 323349
* InstSimplify: If divisor element is undef simplify to undef (Zvi Rackover, 2018-01-24, 1 file, -2/+3)
    Summary: If any vector divisor element is undef, we can arbitrarily choose it
    to be zero, which would make the div/rem an undef value by definition.

    Reviewers: spatel, reames
    Reviewed By: spatel
    Subscribers: magabari, llvm-commits
    Differential Revision: https://reviews.llvm.org/D42485

    llvm-svn: 323343
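    A sketch of the pattern; the function is hypothetical and not taken from the
    commit's tests.

        define <2 x i32> @udiv_undef_lane(<2 x i32> %x) {
          ; The second divisor lane is undef; choosing it to be zero would make
          ; the whole division undefined, so the result folds to undef.
          %d = udiv <2 x i32> %x, <i32 7, i32 undef>
          ret <2 x i32> %d
        }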
* [ValueTracking] add recursion depth param to matchSelectPattern (Sanjay Patel, 2018-01-24, 1 file, -11/+18)
    We're getting bug reports:
    https://bugs.llvm.org/show_bug.cgi?id=35807
    https://bugs.llvm.org/show_bug.cgi?id=35840
    https://bugs.llvm.org/show_bug.cgi?id=36045
    ...where we blow up the stack in value tracking because other passes are
    sending in selects that have an operand that is itself the select.

    We don't currently have a reliable way to avoid analyzing dead code that may
    take non-standard forms, so bail out when things go too far. This mimics the
    recursion depth limitations in other parts of value tracking.

    Unfortunately, this pushes the underlying problems for other passes
    (jump-threading, simplifycfg, correlated-propagation) into hiding. If someone
    wants to uncover those again, the first draft of this patch on Phab would do
    that (it would assert rather than bail out).

    Differential Revision: https://reviews.llvm.org/D42442

    llvm-svn: 323331
* Fix typos of occurred and occurrence (Malcolm Parsons, 2018-01-24, 1 file, -1/+1)
    llvm-svn: 323318
* [Analysis] Disable exp/exp2/pow finite lib calls on Android with -ffast-math. (MinSeong Kim, 2018-01-23, 1 file, -0/+9)
    Summary: Since r322087, glibc's finite lib calls are generated when possible.
    However, glibc is not supported on Android. Therefore this change enables llvm
    to finely distinguish between linux and Android for unsupported library calls.
    The change also includes some regression tests.

    Reviewers: srhines, pirama
    Reviewed By: srhines
    Subscribers: kongyi, chh, javed.absar, llvm-commits
    Differential Revision: https://reviews.llvm.org/D42288

    llvm-svn: 323187
* [InstSimplify] (X << Y) % X -> 0 (Anton Bikineev, 2018-01-23, 1 file, -0/+7)
    llvm-svn: 323182
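    A sketch of the unsigned case. The shift is shown with 'nuw' because without a
    no-wrap guarantee the shift could overflow and the remainder would not have to
    be zero; whether the actual fold requires that flag is an assumption here.

        define i32 @shl_urem(i32 %x, i32 %y) {
          ; %x << %y with no unsigned wrap is an exact multiple of %x,
          ; so the unsigned remainder is 0.
          %shl = shl nuw i32 %x, %y
          %r = urem i32 %shl, %x
          ret i32 %r
        }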
* [ThinLTO] Re-commit of dot dumper after test fix (Eugene Leviant, 2018-01-22, 1 file, -1/+1)
    llvm-svn: 323116
* Revert [SCEV] Fix isLoopEntryGuardedByCond usage (Serguei Katkov, 2018-01-22, 1 file, -10/+2)
    It causes buildbot failures: the newly added assert fires. It seems not all
    usages of isLoopEntryGuardedByCond are fixed.

    llvm-svn: 323079
* [SCEV] Fix isLoopEntryGuardedByCond usage (Serguei Katkov, 2018-01-22, 1 file, -2/+10)
    ScalarEvolution::isKnownPredicate invokes isLoopEntryGuardedByCond without
    checking that the SCEV is available at the entry point of the loop. That is
    incorrect and is fixed by this patch.

    Reviewers: sanjoy, mkazantsev, anna, dorit
    Reviewed By: mkazantsev
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D42165

    llvm-svn: 323077
* Temporarily revert r323062 to investigate buildbot failures (Eugene Leviant, 2018-01-21, 1 file, -1/+1)
    llvm-svn: 323065
* [ThinLTO] Implement summary visualizer (Eugene Leviant, 2018-01-21, 1 file, -1/+1)
    Differential revision: https://reviews.llvm.org/D41297

    llvm-svn: 323062
* [InstSimplify] use m_Specific and commutative matcher to reduce code; NFCI (Sanjay Patel, 2018-01-19, 1 file, -9/+8)
    llvm-svn: 322955
* [ModRefInfo] Return NoModRef for Must and NoModRef. (Alina Sbirlea, 2018-01-19, 2 files, -72/+81)
    Summary: In ModRefInfo, "Must" was introduced to track the presence of
    MustAlias, but we still want to return NoModRef when there is neither Mod nor
    Ref, even when MustAlias is found. The patch has small fixes to ensure this
    happens. Minor cleanup to remove nesting for two if statements when calling
    getModRefInfo for two ImmutableCallSites.

    Reviewers: sanjoy
    Subscribers: jlebar, llvm-commits
    Differential Revision: https://reviews.llvm.org/D42209

    llvm-svn: 322932
* Add a ProfileCount class to represent entry counts. (Easwaran Raman, 2018-01-17, 2 files, -5/+5)
    Summary: The class wraps a uint64_t and an enum to represent the type of
    profile count (real and synthetic) with some helper methods.

    Reviewers: davidxl
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D41883

    llvm-svn: 322771
* Revert [PowerPC] This reverts commit rL322721 (Zaara Syeda, 2018-01-17, 1 file, -4/+0)
    Build bots are failing. Revert the commit for now.

    llvm-svn: 322748
* [MDA] Use common code instead of reimplementing same. [NFC] (Philip Reames, 2018-01-17, 1 file, -10/+2)
    llvm-svn: 322747
* [PowerPC] Add handling for ColdCC calling convention and a pass to mark candidates with coldcc attribute. (Zaara Syeda, 2018-01-17, 1 file, -0/+4)
    This patch adds support for the coldcc calling convention for Power. This
    changes the set of non-volatile registers. It includes a pass to stress test
    the implementation by marking all static directly called functions with the
    coldcc attribute through the option -enable-coldcc-stress-test. It also
    includes an option, -ppc-enable-coldcc, to add the coldcc attribute to
    functions which are cold at all call sites based on BlockFrequencyInfo when
    the containing function does not call any non-cold functions.

    Differential Revision: https://reviews.llvm.org/D38413

    llvm-svn: 322721
* [NFC] fix trivial typos in comments (Hiroshi Inoue, 2018-01-17, 1 file, -1/+1)
    "the the" -> "the"

    llvm-svn: 322636
* [GlobalsAA] Don't let dbg intrinsics affect analysis result (Mikael Holmen, 2018-01-15, 1 file, -0/+4)
    Summary: This fixes PR35899. Debug info intrinsics shouldn't affect code
    generation, so ignore them in GlobalsAA.

    Reviewers: hfinkel, aprantl
    Reviewed By: aprantl
    Subscribers: aprantl, llvm-commits
    Differential Revision: https://reviews.llvm.org/D41984

    llvm-svn: 322470
* [BasicAA] Stop crashing when dealing with pointers > 64 bits. (Davide Italiano, 2018-01-15, 1 file, -0/+7)
    An alternative (and probably better) fix would be that of making `Scale` an
    APInt, and there's a patch floating around to do this. As we're still
    discussing it, at least stop crashing in the meanwhile (added bonus, we now
    have a regression test for this situation).

    Fixes PR35843.

    Thanks to Eli for suggesting the fix and Simon for reporting and reducing
    the bug.

    llvm-svn: 322467
* [InstSimplify] fix code comments; NFC (Sanjay Patel, 2018-01-14, 1 file, -8/+8)
    llvm-svn: 322456
* [InstSimplify] fold implied null ptr check (PR35790) (Sanjay Patel, 2018-01-13, 1 file, -15/+16)
    This extends rL322327 to handle the pointer cast and should solve:
    https://bugs.llvm.org/show_bug.cgi?id=35790

    Name: or_eq_zero
      %isnull = icmp eq i64* %p, null
      %x = ptrtoint i64* %p to i64
      %somebits = and i64 %x, %y
      %somebits_are_zero = icmp eq i64 %somebits, 0
      %or = or i1 %somebits_are_zero, %isnull
    =>
      %or = %somebits_are_zero

    Name: and_ne_zero
      %isnotnull = icmp ne i64* %p, null
      %x = ptrtoint i64* %p to i64
      %somebits = and i64 %x, %y
      %somebits_are_not_zero = icmp ne i64 %somebits, 0
      %and = and i1 %somebits_are_not_zero, %isnotnull
    =>
      %and = %somebits_are_not_zero

    https://rise4fun.com/Alive/CQ3

    llvm-svn: 322439
* [InstSimplify] fold implied cmp with zero (PR35790) (Sanjay Patel, 2018-01-11, 1 file, -0/+42)
    This doesn't handle the more complicated case in the bug report yet:
    https://bugs.llvm.org/show_bug.cgi?id=35790
    For that, we have to match / look through a cast.

    llvm-svn: 322327
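    Presumably the integer-only version of the pattern shown for rL322439 above;
    this sketch is an assumption, not taken from the commit's tests.

        define i1 @or_implied_zero_check(i64 %x, i64 %y) {
          ; If %x == 0 then (%x & %y) == 0 as well, so the second compare adds
          ; nothing and the whole 'or' simplifies to %somebits_are_zero.
          %somebits = and i64 %x, %y
          %somebits_are_zero = icmp eq i64 %somebits, 0
          %iszero = icmp eq i64 %x, 0
          %or = or i1 %somebits_are_zero, %iszero
          ret i1 %or
        }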
* [ValueTracking] recognize min/max-of-min/max with notted ops (PR35875) (Sanjay Patel, 2018-01-11, 1 file, -12/+31)
    This was originally planned as the fix for:
    https://bugs.llvm.org/show_bug.cgi?id=35834
    ...but simpler transforms handled that case, so I implemented a lesser
    solution. It turns out we need to handle the case with 'not' ops too because
    the real code example that we are trying to solve:
    https://bugs.llvm.org/show_bug.cgi?id=35875
    ...has extra uses of the intermediate values, so we can't rely on smaller
    canonicalizations to get us to the goal.

    As with rL321672, I've tried to show every possibility in the codegen tests
    because that's the simplest way to prove we're doing the right thing in the
    wide variety of permutations of this pattern.

    We can also show an InstCombine win because we added a fold for this case in:
    rL321998 / D41603

    An Alive proof for one variant of the pattern to show that the InstCombine
    and codegen results are correct:
    https://rise4fun.com/Alive/vd1

    Name: min3_nots
      %nx = xor i8 %x, -1
      %ny = xor i8 %y, -1
      %nz = xor i8 %z, -1
      %cmpxz = icmp slt i8 %nx, %nz
      %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
      %cmpyz = icmp slt i8 %ny, %nz
      %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
      %cmpyx = icmp slt i8 %y, %x
      %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
    =>
      %cmpxyz = icmp slt i8 %minxz, %ny
      %r = select i1 %cmpxyz, i8 %minxz, i8 %ny

    Name: min3_nots_alt
      %nx = xor i8 %x, -1
      %ny = xor i8 %y, -1
      %nz = xor i8 %z, -1
      %cmpxz = icmp slt i8 %nx, %nz
      %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
      %cmpyz = icmp slt i8 %ny, %nz
      %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
      %cmpyx = icmp slt i8 %y, %x
      %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
    =>
      %xz = icmp sgt i8 %x, %z
      %maxxz = select i1 %xz, i8 %x, i8 %z
      %xyz = icmp sgt i8 %maxxz, %y
      %maxxyz = select i1 %xyz, i8 %maxxz, i8 %y
      %r = xor i8 %maxxyz, -1

    llvm-svn: 322283
* Avoid inlining if there are byval arguments with non-alloca address space (Bjorn Pettersson, 2018-01-10, 1 file, -0/+13)
    Summary: After teaching InlineCost more about address spaces () another fault
    was detected in the inliner. If an argument has the byval attribute, the
    parameter might be copied to an alloca. That part seems to work fine even if
    the argument has a different address space than the alloca address space.
    However, if the address spaces differ, then the inlined function still might
    refer to the parameter using the original address space (the inliner does not
    handle that situation very well).

    This patch avoids the problem by simply disallowing inlining when there are
    byval arguments with an address space that differs from the alloca address
    space.

    I'm not really sure how to transform the code if we want to get inlining for
    this situation. I assume that it never has been working, and that the fixes in
    r321809 just exposed an old problem.

    Fault found by skatkov (Serguei Katkov). It is mentioned in follow-up comments
    to https://reviews.llvm.org/D40455.

    Reviewers: skatkov
    Reviewed By: skatkov
    Subscribers: uabelho, eraman, llvm-commits, haicheng
    Differential Revision: https://reviews.llvm.org/D41898

    llvm-svn: 322181
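    A sketch of the problematic shape, using a made-up target where allocas live in
    address space 0 while the byval argument points into address space 1.

        ; Inlining would copy the byval argument into an addrspace(0) alloca while
        ; the inlined body still uses addrspace(1); with this change the inliner
        ; simply refuses to inline such callees.
        define i32 @callee(i32 addrspace(1)* byval %p) {
          %v = load i32, i32 addrspace(1)* %p
          ret i32 %v
        }

        define i32 @caller(i32 addrspace(1)* %p) {
          %r = call i32 @callee(i32 addrspace(1)* byval %p)
          ret i32 %r
        }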
* Add a pass to generate synthetic function entry counts. (Easwaran Raman, 2018-01-09, 2 files, -0/+123)
    Summary: This pass synthesizes function entry counts by traversing the
    callgraph and using the relative block frequencies of the callsites. The
    intended use of these counts is in inlining to determine hot/cold callsites in
    the absence of profile information.

    The pass is split into two files, with the code that propagates the counts in
    a callgraph in a Utils file. I plan to add support for propagation in the
    thinlto link phase, and the propagation code will be shared, hence this split.

    I did not add support to the old PM since hot callsite determination in
    inlining is not possible in the old PM (although we could use the hot callee
    heuristic with synthetic counts in the old PM, it is not worth the effort
    tuning it).

    Reviewers: davidxl, silvas
    Subscribers: mgorny, mehdi_amini, llvm-commits
    Differential Revision: https://reviews.llvm.org/D41604

    llvm-svn: 322110