path: root/llvm/test/Analysis/ConstantFolding
* [ConstantFold][SVE] Fix constant folding for shufflevector. (Eli Friedman, 2019-12-09; 1 file, +11/-0)
  Don't try to fold away shuffles which can't be folded. Fix creation of shufflevector constant expressions.
  Differential Revision: https://reviews.llvm.org/D71147
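  A minimal IR sketch of the kind of case this guards (the function name and exact types are illustrative,
  not taken from the actual test file): with scalable vectors the element count is unknown at compile time,
  so only splat-style shuffles can be folded and everything else must be left alone.

      define <vscale x 4 x i32> @shuffle_scalable_splat() {
        ; A zero-mask splat of a scalable constant; anything requiring a known
        ; per-lane mapping cannot be folded to a literal constant vector.
        %s = shufflevector <vscale x 4 x i32> zeroinitializer, <vscale x 4 x i32> undef, <vscale x 4 x i32> zeroinitializer
        ret <vscale x 4 x i32> %s
      }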
* [ConstantFold][SVE] Skip scalable vectors in ConstantFoldInsertElementInstruction. (Huihui Zhang, 2019-12-05; 1 file, +19/-0)
  Summary: Do not constant fold insertelement instructions for scalable vector types.
  Reviewers: huntergr, sdesmalen, spatel, lebedev.ri, apazos, efriedma, willlovett
  Reviewed By: efriedma, spatel
  Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70985
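  A hypothetical test in the spirit of this change (name chosen for illustration):

      define <vscale x 4 x i32> @insertelement_scalable() {
        ; The number of lanes is unknown at compile time, so folding this to a
        ; literal constant vector is not possible and the fold must be skipped.
        %i = insertelement <vscale x 4 x i32> undef, i32 1, i32 0
        ret <vscale x 4 x i32> %i
      }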
* [ConstFolding] move tests for copysign; NFC (Sanjay Patel, 2019-11-26; 1 file, +53/-0)
  InstCombine doesn't have any transforms for copysign currently.
* [ConstantFold] Handle identity folds at top of ConstantFoldBinaryInst (Florian Hahn, 2019-11-17; 1 file, +4/-4)
  Currently we miss folds with undef and identity values for binary ops that do not fold to undef in
  general. We can generalize the identity simplifications and do them before checking for undef in
  particular.
  Alive checks:
  * OR  - https://rise4fun.com/Alive/8OsK
  * AND - https://rise4fun.com/Alive/e3tE
  This will also allow us to remove some now-redundant cases throughout the function, but I would like to do
  that as a follow-up. That should make tracking down potential issues easier.
  Reviewers: spatel, RKSimon, lebedev.ri
  Reviewed By: spatel
  Differential Revision: https://reviews.llvm.org/D70169
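  To illustrate the ordering issue, a sketch of one such identity fold (function name is made up):

      define i32 @and_undef_with_identity() {
        ; -1 is the identity for 'and', so this can fold to undef; checking the
        ; undef operand first would instead produce the weaker result 0.
        %r = and i32 undef, -1
        ret i32 %r
      }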
* [ConstantFold] Add some tests for binops with constants and undefs. (Florian Hahn, 2019-11-17; 1 file, +50/-0)
  Precommit tests for D70169.
* [ConstantFold] Fold extractelement of getelementptr (Jay Foad, 2019-10-28; 1 file, +1/-1)
  Summary: Getelementptr has vector type if any of its operands are vectors (the scalar operands being
  implicitly broadcast to all vector elements). Extractelement applied to a vector getelementptr can be
  folded by applying the extractelement in turn to all of the vector operands.
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69379
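  A small sketch of the fold described above (global and function names are illustrative):

      @g = global [4 x i32] zeroinitializer

      define i32* @extract_lane_of_gep() {
        ; The scalar operands are broadcast across the lanes, so extracting lane 1
        ; folds to getelementptr([4 x i32], [4 x i32]* @g, i64 0, i64 2).
        %v = getelementptr [4 x i32], [4 x i32]* @g, i64 0, <2 x i64> <i64 1, i64 2>
        %e = extractelement <2 x i32*> %v, i32 1
        ret i32* %e
      }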
* [ConstantFolding] Fold constant calls to log2() (Evandro Menezes, 2019-09-30; 1 file, +4/-6)
  Somehow, folding calls to `log2()` with a constant was missing.
  Differential revision: https://reviews.llvm.org/D67300
  llvm-svn: 373262
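  A minimal sketch of the kind of call this enables folding for (assuming the target's library info
  recognizes log2; the function name is made up):

      declare double @log2(double)

      define double @log2_constant() {
        ; log2(8.0) is exactly 3.0, so the libcall can fold away to a constant.
        %r = call double @log2(double 8.0)
        ret double %r
      }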
* [ConstantFolding] Expand folding of some library functions (Evandro Menezes, 2019-09-12; 3 files, +306/-0)
  Expand the folding of `nearbyint()`, `rint()` and `trunc()` to the library functions, in addition to the
  current support for the corresponding intrinsics.
  Differential revision: https://reviews.llvm.org/D67468
  llvm-svn: 371774
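  An illustrative sketch, assuming the library call is recognized like the llvm.trunc intrinsic (function
  name invented for the example):

      declare double @trunc(double)

      define double @trunc_constant() {
        ; Once the trunc() libcall is handled like llvm.trunc, this folds to 2.0.
        %r = call double @trunc(double 2.5)
        ret double %r
      }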
* [ConstantFolding] Add new test cases for transcendentals (NFC) (Evandro Menezes, 2019-09-06; 2 files, +245/-0)
  llvm-svn: 371246
* Fix pointer width in test from r366754. (Peter Collingbourne, 2019-07-22; 1 file, +2/-2)
  llvm-svn: 366764
* Analysis: Don't look through aliases when simplifying GEPs. (Peter Collingbourne, 2019-07-22; 1 file, +17/-0)
  It is not safe in general to replace an alias in a GEP with its aliasee if the alias can be replaced with
  another definition (i.e. via strong/weak resolution (linkonce_odr) or via symbol interposition (default
  visibility in ELF)) while the aliasee cannot. An example of how this can go wrong is in the included test
  case.
  I was concerned that this might be a load-bearing misoptimization (it's possible for us to use aliases to
  share vtables between base and derived classes, and on Windows, vtable symbols will always be aliases in
  RTTI mode, so this change could theoretically inhibit trivial devirtualization in some cases), so I built
  Chromium for Linux and Windows with and without this change. The file sizes of the resulting binaries were
  identical, so it doesn't look like this is going to be a problem.
  Differential Revision: https://reviews.llvm.org/D65118
  llvm-svn: 366754
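  A rough sketch of the shape of the problem (not the actual included test case; names are invented):

      @obj = global [2 x i32] zeroinitializer
      @alias = weak alias [2 x i32], [2 x i32]* @obj

      define i32* @gep_through_alias() {
        ; @alias has weak linkage and may be replaced at link time, so the GEP
        ; must not be rewritten in terms of @obj.
        %p = getelementptr [2 x i32], [2 x i32]* @alias, i64 0, i64 1
        ret i32* %p
      }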
* [ConstantFolding] Add constant folding for smul.fix and smul.fix.sat (Bjorn Pettersson, 2019-06-19; 2 files, +244/-0)
  Summary: This patch teaches ConstantFolding to constant fold both scalar and vector variants of
  llvm.smul.fix and llvm.smul.fix.sat.
  As described in the LangRef, rounding is unspecified for these intrinsics. If the result cannot be
  represented exactly, the default behavior in ConstantFolding is to round down towards negative infinity.
  If a target has a preferred rounding that is different, some kind of target hook would be needed (same
  strategy as used by the SelectionDAG legalizer).
  Reviewers: nikic, leonardchan, RKSimon
  Reviewed By: leonardchan
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63385
  llvm-svn: 363811
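  A sketch of the round-toward-negative-infinity behavior described above (function name is illustrative):

      declare i32 @llvm.smul.fix.i32(i32, i32, i32)

      define i32 @smul_fix_round_down() {
        ; With a scale of 1 each operand encodes 1.5; the exact product 2.25 is
        ; not representable, so the result rounds down to 2.0, encoded as 4.
        %r = call i32 @llvm.smul.fix.i32(i32 3, i32 3, i32 1)
        ret i32 %r
      }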
* Add FNeg support to InstructionSimplify (Cameron McInally, 2019-05-06; 1 file, +0/-11)
  Differential Revision: https://reviews.llvm.org/D61573
  llvm-svn: 360053
* Precommit an FNeg InstructionSimplify test. (Cameron McInally, 2019-05-05; 1 file, +11/-0)
  llvm-svn: 359990
* Add FNeg IR constant folding support (Cameron McInally, 2019-05-05; 1 file, +42/-0)
  llvm-svn: 359982
* [ConstantFolding] Fold undef for integer intrinsics (Nikita Popov, 2019-01-11; 3 files, +52/-104)
  This fixes https://bugs.llvm.org/show_bug.cgi?id=40110.
  This implements handling of undef operands for integer intrinsics in ConstantFolding, in particular for
  the bitcounting intrinsics (ctpop, cttz, ctlz), the with.overflow intrinsics, the saturating math
  intrinsics and the funnel shift intrinsics.
  The undef behavior follows what InstSimplify does for the general case of non-constant operands. For the
  bitcount intrinsics (where InstSimplify doesn't do undef handling -- there cannot be a combination of an
  undef + non-constant operand) I'm using a 0 result if the intrinsic is defined for zero and undef
  otherwise.
  Differential Revision: https://reviews.llvm.org/D55950
  llvm-svn: 350971
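  One sketch of the bitcount case described above (function name chosen for illustration):

      declare i32 @llvm.ctpop.i32(i32)

      define i32 @ctpop_undef() {
        ; ctpop(0) == 0, so choosing 0 for an undef operand is always a valid
        ; refinement of the intrinsic's result.
        %r = call i32 @llvm.ctpop.i32(i32 undef)
        ret i32 %r
      }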
* Revert patches 348835 and 348571 because they're causing code size performance regressions. (Ranjeet Singh, 2019-01-04; 1 file, +0/-27)
  llvm-svn: 350402
* [ConstantFolding] Consolidate and extend bitcount intrinsic tests; NFC (Nikita Popov, 2018-12-20; 1 file, +187/-0)
  Move constant folding tests into ConstantFolding/bitcount.ll and drop various tests in other places. Add
  coverage for undefs.
  llvm-svn: 349806
* [ConstantFolding] Add tests for funnel shifts with undef operands; NFC (Nikita Popov, 2018-12-20; 1 file, +167/-0)
  llvm-svn: 349803
* [ConstantFolding] Add tests for sat add/sub with undefs; NFC (Nikita Popov, 2018-12-20; 1 file, +218/-0)
  llvm-svn: 349802
* [ConstantFolding] Split up saturating add/sub tests; NFC (Nikita Popov, 2018-12-20; 1 file, +158/-97)
  Split each test into a separate function.
  llvm-svn: 349801
* Cleanup test case by removing unused attribute dso_local (Ranjeet Singh, 2018-12-11; 1 file, +4/-4)
  The 'dso_local' attribute was generated in the bitcode when compiling the original C file, but it isn't
  needed here.
  Differential Revision: https://reviews.llvm.org/D55521
  llvm-svn: 348835
* [IR] Don't assume all functions are 4 byte aligned (Ranjeet Singh, 2018-12-07; 1 file, +27/-0)
  In some cases different alignments might be used for functions to save space, e.g. Thumb mode with -Oz
  will try to use 2 byte function alignment. A similar patch that fixed this in other areas exists here:
  https://reviews.llvm.org/D46110
  This was approved previously as https://reviews.llvm.org/D55115 (r348215), but when committed it caused
  failures on the sanitizer buildbots when building llvm with clang (containing this patch). This is now
  fixed because I've added a check: if getting the parent module returns null, the alignment is set to 0.
  Differential Revision: https://reviews.llvm.org/D55115
  llvm-svn: 348571
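  A sketch of why the 4-byte assumption is unsafe (names and constants are illustrative only):

      define void @callee() align 2 {
        ret void
      }

      define i1 @low_bit_of_function_address() {
        ; With 2 byte function alignment (e.g. Thumb at -Oz) bit 1 of the address
        ; is not known to be zero, so this comparison must not be folded to true.
        %i = ptrtoint void ()* @callee to i32
        %b = and i32 %i, 2
        %c = icmp eq i32 %b, 0
        ret i1 %c
      }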
* Reverting r348215 (Ranjeet Singh, 2018-12-04; 1 file, +0/-27)
  Causing failures on ubsan buildbot boxes.
  llvm-svn: 348230
* [IR] Don't assume all functions are 4 byte aligned (Ranjeet Singh, 2018-12-04; 1 file, +27/-0)
  In some cases different alignments might be used for functions to save space, e.g. Thumb mode with -Oz
  will try to use 2 byte function alignment. A similar patch that fixed this in other areas exists here:
  https://reviews.llvm.org/D46110
  Differential Revision: https://reviews.llvm.org/D55115
  llvm-svn: 348215
* [ConstantFolding] Add support for saturating add/sub (Sanjay Patel, 2018-11-20; 1 file, +111/-0)
  Support saturating add/sub in constant folding, based on the APInt methods introduced in D54332.
  Patch by: @nikic (Nikita Popov)
  Differential Revision: https://reviews.llvm.org/D54531
  llvm-svn: 347328
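  A minimal sketch of the saturating behavior (function name is made up):

      declare i8 @llvm.uadd.sat.i8(i8, i8)

      define i8 @uadd_sat_saturates() {
        ; 200 + 100 does not fit in an unsigned i8, so the result clamps to 255.
        %r = call i8 @llvm.uadd.sat.i8(i8 200, i8 100)
        ret i8 %r
      }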
* [ConstantFolding] Constant fold minimum and maximum intrinsics (Thomas Lively, 2018-10-19; 1 file, +136/-0)
  Summary: Depends on D52764
  Reviewers: aheejin, dschuff
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D52765
  llvm-svn: 344796
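  An illustrative sketch of one folded case (function name invented for the example):

      declare float @llvm.minimum.f32(float, float)

      define float @minimum_propagates_nan() {
        ; Unlike minnum, llvm.minimum propagates NaN operands, so this folds to NaN.
        %r = call float @llvm.minimum.f32(float 1.0, float 0x7FF8000000000000)
        ret float %r
      }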
* Prevent Constant Folding From Optimizing inrange GEP (Peter Collingbourne, 2018-09-11; 1 file, +12/-15)
  This patch does the following things:
  1. update SymbolicallyEvaluateGEP so that it bails out if it cannot preserve the inrange attribute;
  2. update llvm/test/Analysis/ConstantFolding/gep.ll to remove UB in it;
  3. remove an inaccurate comment above ConstantFoldInstOperandsImpl in llvm/lib/Analysis/ConstantFolding.cpp;
  4. add a new regression test that makes sure that no optimizations change an inrange GEP in an unexpected way.
  Patch by Zhaomo Yang!
  Differential Revision: https://reviews.llvm.org/D51698
  llvm-svn: 341888
* [InstCombine] remove unnecessary shuffle undef folding (Sanjay Patel, 2018-08-29; 1 file, +8/-0)
  Add a test for constant folding to show that (shuffle undef, undef, mask) should already be handled via
  instsimplify.
  llvm-svn: 340926
* [ConstantFolding] improve folding of binops with vector undef operand (Sanjay Patel, 2018-08-20; 1 file, +8/-8)
  A non-undef operand may still have undef constant elements, so we should always propagate the vector
  results per-lane.
  llvm-svn: 340194
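  A small sketch of the per-lane propagation (function name is illustrative):

      define <2 x i32> @add_with_undef_lane() {
        ; The second operand is not undef as a whole, but its second lane is, so
        ; the fold is applied per lane, giving <i32 4, i32 undef>.
        %r = add <2 x i32> <i32 1, i32 2>, <i32 3, i32 undef>
        ret <2 x i32> %r
      }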
* [ConstantFolding] add tests for binops on vectors with undef elements; NFC (Sanjay Patel, 2018-08-20; 1 file, +61/-0)
  llvm-svn: 340190
* [ConstantFolding] add simplifications for funnel shift intrinsics (Sanjay Patel, 2018-08-17; 1 file, +6/-12)
  This is another step towards being able to canonicalize to the funnel shift intrinsics in IR (see D49242
  for the initial patch). We should not have any loss of simplification power in IR between these and the
  equivalent IR constructs.
  Differential Revision: https://reviews.llvm.org/D50848
  llvm-svn: 340022
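  A sketch of a constant funnel shift this covers (function name and values chosen for illustration):

      declare i8 @llvm.fshl.i8(i8, i8, i8)

      define i8 @fshl_constants() {
        ; fshl(a, b, 3) on i8 is (a << 3) | (b >> 5); for 0xAB and 0xCD that is 0x5E.
        %r = call i8 @llvm.fshl.i8(i8 171, i8 205, i8 3)
        ret i8 %r
      }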
* [ConstantFolding] add tests for funnel shift intrinsics; NFC (Sanjay Patel, 2018-08-16; 1 file, +89/-0)
  No functionality for this yet.
  llvm-svn: 339889
* [ConstantFold] Disallow folding vector geps into bitcasts (Karl-Johan Karlsson, 2018-06-01; 1 file, +3/-3)
  Summary: Getelementptr returns a vector of pointers, instead of a single address, when one or more of its
  arguments is a vector. In such a case it is not possible to simplify the expression by inserting a bitcast
  of operand(0) into the destination type, as it would create a bitcast between different sizes.
  Reviewers: majnemer, mkuper, mssimpso, spatel
  Reviewed By: spatel
  Subscribers: lebedev.ri, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46379
  llvm-svn: 333783
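  A rough sketch of the shape being protected (global and function names are invented):

      @s = global i16 0

      define <4 x i16*> @vector_gep_all_zero_indices() {
        ; The result is a vector of pointers even though the base is scalar, so
        ; folding it into a bitcast of @s would cast between types of different
        ; sizes and must not happen.
        %p = getelementptr i16, i16* @s, <4 x i64> zeroinitializer
        ret <4 x i16*> %p
      }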
* [ConstantFold] Add lit testcase for bitcast problem. NFC (Karl-Johan Karlsson, 2018-06-01; 1 file, +29/-0)
  llvm-svn: 333767
* [ConstantFolding, InstSimplify] Handle more vector GEPs (Matthew Simpson, 2018-03-15; 1 file, +26/-0)
  This patch addresses some additional cases where the compiler crashes upon encountering vector GEPs. This
  should fix PR36116.
  Differential Revision: https://reviews.llvm.org/D44219
  Reference: https://bugs.llvm.org/show_bug.cgi?id=36116
  llvm-svn: 327638
* Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1) (Daniel Neilson, 2018-01-19; 1 file, +3/-3)
  Summary: This is a resurrection of work first proposed and discussed in Aug 2015
  (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html) and initially landed (but then backed
  out) in Nov 2015 (http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html).
  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument which is required to be a
  constant integer. It represents the alignment of the dest (and source), and so must be the minimum of the
  actual alignment of the two.
  This change is the first in a series that allows source and dest to each have their own alignments by
  using the alignment attribute on their arguments. In this change we:
  1) Remove the alignment argument.
  2) Add alignment attributes to the source & dest arguments. We, temporarily, require that the alignments
     for source & dest be equal.
  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)
  Downstream users may have to update their lit tests that check for @llvm.memcpy/memmove/memset
  call/declaration patterns. The following extended sed script may help with updating the majority of your
  tests, but it does not catch all possible patterns so some manual checking and updating will be required.
    s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g
  The remaining changes in the series will:
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing source and dest
  alignments.
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API, and those that
  use MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment() and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
  MemIntrinsicInst::[get|set]Alignment() methods.
  Reviewers: pete, hfinkel, lhames, reames, bollu
  Reviewed By: reames
  Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay,
  mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman,
  aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal,
  llvm-commits
  Differential Revision: https://reviews.llvm.org/D41675
  llvm-svn: 322965
* [ConstantFolding] Avoid assert when folding ptrtoint of vectorized GEP (Bjorn Pettersson, 2017-10-24; 1 file, +32/-0)
  Summary: Got asserts in llvm::CastInst::getCastOpcode saying:
  `DestBits == SrcBits && "Illegal cast to vector (wrong type or size)"' failed.
  The problem seemed to be that llvm::ConstantFoldCastInstruction did not handle a ptrtoint cast of a
  getelementptr returning a vector correctly. I assume such situations are quite rare, since the GEP needs
  to be considered as a constant value (base pointer being null).
  The solution used here is to simply avoid the constant fold of ptrtoint when the value is a vector. It is
  not supported, and by bailing out we do not fail on assertions later on.
  Reviewers: craig.topper, majnemer, davide, filcab, efriedma
  Reviewed By: efriedma
  Subscribers: efriedma, filcab, llvm-commits
  Differential Revision: https://reviews.llvm.org/D38546
  llvm-svn: 316430
* [InstSimplify] Constant fold the new GEP in SimplifyGEPInst. (Joey Gouly, 2017-06-06; 1 file, +1/-1)
  llvm-svn: 304784
* [ConstantFolding] Fix to prevent constant folding having to repeatedly scan operands. NFCI (David Green, 2017-03-21; 1 file, +73/-0)
  After the loop unroll threshold was increased in r295538, very large constant expressions can be created.
  This prevents them from having to be recursively scanned, leading to a compile time blow-up.
  Differential Revision: https://reviews.llvm.org/D30689
  llvm-svn: 298356
* [ConstantFold] Fix defect in constant folding computation for GEP (Javed Absar, 2017-03-08; 1 file, +52/-0)
  When the array indexes are all determined by GVN to be constants, a call is made to constant-folding to
  optimize/simplify the address computation. The constant-folding, however, makes a mistake in that it
  sometimes reads back stale Idxs instead of the NewIdxs it re-computed in the previous iteration. This
  leads to incorrect addresses coming out of constant-folding to GEP. A test case is included. The error is
  only triggered when the indexes follow particular patterns where the interplay between the stale and new
  index updates matters.
  Reviewers: Daniel Berlin
  Differential Revision: https://reviews.llvm.org/D30642
  llvm-svn: 297317
* [ConstantFolding] Fix vector GEPs harder (Michael Kuperstein, 2016-12-21; 1 file, +21/-0)
  For vector GEPs, CastGEPIndices can end up in an infinite recursion, because we compare the vector type to
  the scalar pointer type, find them different, and then try to cast a type to itself.
  Differential Revision: https://reviews.llvm.org/D28009
  llvm-svn: 290260
* ConstantFolding: Don't crash when encountering vector GEP (Keno Fischer, 2016-12-08; 1 file, +19/-0)
  ConstantFolding tried to cast one of the scalar indices to a vector type. Instead, use the vector type
  only for the first index (which is the only one allowed to be a vector) and use its scalar type otherwise.
  Fixes PR31250.
  Reviewers: majnemer
  Differential Revision: https://reviews.llvm.org/D27389
  llvm-svn: 289073
* IR: Introduce inrange attribute on getelementptr indices. (Peter Collingbourne, 2016-11-10; 1 file, +30/-0)
  If the inrange keyword is present before any index, loading from or storing to any pointer derived from
  the getelementptr has undefined behavior if the load or store would access memory outside of the bounds of
  the element selected by the index marked as inrange.
  This can be used, e.g. for alias analysis or to split globals at element boundaries where beneficial.
  As previously proposed on llvm-dev: http://lists.llvm.org/pipermail/llvm-dev/2016-July/102472.html
  Differential Revision: https://reviews.llvm.org/D22793
  llvm-svn: 286514
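  A rough sketch of what an inrange constant GEP looks like (global and function names are invented, and
  this is illustrative rather than the actual test from the patch):

      @vt = constant { [2 x i8*], [2 x i8*] } zeroinitializer

      define i8* @load_inrange_slot() {
        ; The derived pointer may only be used to access the second [2 x i8*]
        ; element; transformations (including constant folding) must not rewrite
        ; the expression in a way that loses or widens that guarantee.
        %v = load i8*, i8** getelementptr ({ [2 x i8*], [2 x i8*] }, { [2 x i8*], [2 x i8*] }* @vt, i32 0, inrange i32 1, i32 0)
        ret i8* %v
      }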