path: root/llvm/lib
Commit message | Author | Age | Files | Lines
* [InstCombine] Extra null-checking on TFE/LWE support | Michael Liao | 2019-02-01 | 1 | -4/+3
    - If that operand is not ConstantInt, skip enabling TFE/LWE.
    Differential Revision: https://reviews.llvm.org/D57539
    llvm-svn: 352904
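    A minimal sketch of the guard this describes, using the usual dyn_cast
    idiom (names here are hypothetical, not the actual InstCombine code):

        #include "llvm/IR/Constants.h"
        #include "llvm/Support/Casting.h"
        using namespace llvm;

        // Only treat the operand as a TFE/LWE control if it is literally
        // a ConstantInt; for anything else, skip enabling TFE/LWE.
        static bool shouldEnableTfeLwe(Value *Op) {
          if (auto *CI = dyn_cast<ConstantInt>(Op))
            return !CI->isZero(); // non-zero constant requests TFE/LWE
          return false;           // not a ConstantInt: skip
        }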
* test commit (add blank line) NFC | Roland Froese | 2019-02-01 | 1 | -0/+1
    llvm-svn: 352897
* [DWARF v5] Fix DWARF emitter and consumer to produce/expect a uleb for a location description's length. | Wolfgang Pieb | 2019-02-01 | 2 | -3/+6
    Reviewers: davide, JDevlieghere
    Differential Revision: https://reviews.llvm.org/D57550
    llvm-svn: 352889
* [AMDGPU] Fix for vector element insertion | Tim Corringham | 2019-02-01 | 1 | -5/+5
    Summary:
    Incorrect code was generated when lowering insertelement operations
    for vectors with 8- or 16-bit elements. The value being inserted was
    not adjusted for the position of the element within the 32-bit word,
    and so only the low element within each 32-bit word could receive
    the intended value.
    Fixed by simply replicating the value to each element of a congruent
    vector before the mask-and-or operation used to update the intended
    element. A number of affected LIT tests have been updated
    appropriately.
    Reviewers: arsenm, nhaehnle
    Reviewed By: arsenm
    Subscribers: llvm-commits, arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D57588
    llvm-svn: 352885
* [SDAG] improve variable names; NFC | Sanjay Patel | 2019-02-01 | 1 | -23/+22
    The version of FoldConstantArithmetic() that takes arbitrary nodes
    was confusingly naming those nodes as constants when they might not
    be; also "Cst" reads like "Cast".
    llvm-svn: 352884
* [X86][SSE] Use PSLLDQ/PSRLDQ to mask out zeroable ends of a shuffle | Simon Pilgrim | 2019-02-01 | 1 | -0/+73
    As suggested on PR40318, this patch uses PSLLDQ/PSRLDQ to lower
    shuffles to zero out the ends of a vector, leaving a sequential inner
    section.
    For pre-SSSE3 we do this for shuffles with zeros at either end
    (requiring up to 3 shifts), but once PSHUFB is available I've limited
    this to shuffles with a single zeroable end (2 shifts).
    Differential Revision: https://reviews.llvm.org/D56784
    llvm-svn: 352883
* [TargetLowering] try harder to determine undef elements of vector binops | Sanjay Patel | 2019-02-01 | 1 | -7/+61
    This might be the start of tracking all vector element constants
    generally if we take it to its logical conclusion, but let's stop
    here and make sure this is correct/beneficial so far.
    The affected tests require a convoluted path before they get
    simplified currently because we don't call
    SimplifyDemandedVectorElts() from binops directly and don't modify
    the binop operands directly in SimplifyDemandedVectorElts(). That's
    why the tests all have a trailing shuffle to induce a chain reaction
    of transforms. So something like this is happening:
    1. Improve the knowledge of undefs in the binop via a
       SimplifyDemandedVectorElts() call that originates from a shuffle.
    2. Transfer that undef knowledge back to the shuffle mask user as
       more undef lanes.
    3. Combine the modified shuffle by calling
       SimplifyDemandedVectorElts() again.
    4. Translate the improved shuffle mask as undemanded lanes of build
       vector constants, causing those to become full undef constants.
    5. Simplify the binop now that it has a full undef operand.
    As we can see from the unchanged 'and' and 'or' tests, tracking
    undefs alone isn't a full solution. We would need to track zero and
    all-ones constants to improve those opcodes. We'd probably need to
    track NaN for FP ops too (assuming we don't have fast-math-flags
    set).
    Differential Revision: https://reviews.llvm.org/D57066
    llvm-svn: 352880
* [X86][AVX] Combine INSERT_SUBVECTOR(SRC0, BITCAST(SHUFFLE(EXTRACT_SUBVECTOR(SRC1)))) | Simon Pilgrim | 2019-02-01 | 1 | -3/+4
    Enable peeking through one-use bitcasts to the subvector shuffle.
    This still depends on the subvector being the same scalar-size, but
    D57514 has already helped with the more tricky patterns.
    llvm-svn: 352879
* [InstCombine] reduce duplicate code; NFC | Sanjay Patel | 2019-02-01 | 1 | -2/+1
    An unused variable problem was introduced with rL352870 and stubbed
    out with rL352871, but we can make a better fix by actually using the
    local variable in code rather than just the assert.
    llvm-svn: 352873
* [InstCombine] Fix -Wunused-variable when -DLLVM_ENABLE_ASSERTIONS=off | Fangrui Song | 2019-02-01 | 1 | -0/+1
    llvm-svn: 352871
* [InstCombine] try to reduce x86 addcarry to generic uaddo intrinsic | Sanjay Patel | 2019-02-01 | 1 | -0/+33
    If we can reduce the x86-specific intrinsic to the generic op, it
    allows existing simplifications and value tracking folds. AFAICT,
    this always results in identical x86 codegen in the non-reduced
    case... which should be true because we semi-generically (too
    aggressively IMO) convert to llvm.uadd.with.overflow in CGP, so the
    DAG/isel must already combine/lower this intrinsic as expected.
    This isn't quite what was requested in:
    https://bugs.llvm.org/show_bug.cgi?id=40486
    ...but we want to have these kinds of folds early for efficiency and
    to enable greater simplifications. For the case in the bug report
    where we have:
    _addcarry_u64(0, ahi, 0, &ahi)
    ...this gets completely simplified away in IR.
    Differential Revision: https://reviews.llvm.org/D57453
    llvm-svn: 352870
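    A rough IRBuilder sketch of the reduction described above, simplified
    to the zero carry-in case (illustrative only; the helper name and the
    exact result-struct handling are assumptions, not code from the patch):

        #include "llvm/IR/Constants.h"
        #include "llvm/IR/IRBuilder.h"
        #include "llvm/IR/Intrinsics.h"
        #include "llvm/IR/Module.h"
        using namespace llvm;

        // Rewrite llvm.x86.addcarry.64(0, %a, %b) as the generic
        // llvm.uadd.with.overflow.i64(%a, %b). The x86 form yields
        // {i8 carry, i64 sum}, the generic form {i64 sum, i1 ovf}, so the
        // result struct is re-packed and the i1 carry zext'd to i8.
        static Value *reduceAddCarry(IRBuilder<> &B, Value *A, Value *Bv) {
          Module *M = B.GetInsertBlock()->getModule();
          Function *UAddO = Intrinsic::getDeclaration(
              M, Intrinsic::uadd_with_overflow, A->getType());
          Value *Pair = B.CreateCall(UAddO, {A, Bv});
          Value *Sum = B.CreateExtractValue(Pair, 0);
          Value *Carry =
              B.CreateZExt(B.CreateExtractValue(Pair, 1), B.getInt8Ty());
          Value *Res =
              UndefValue::get(StructType::get(B.getInt8Ty(), A->getType()));
          Res = B.CreateInsertValue(Res, Carry, 0);
          return B.CreateInsertValue(Res, Sum, 1);
        }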
* [AArch64] Optimize floating point materialization | Adhemerval Zanella | 2019-02-01 | 2 | -28/+23
    This patch changes isFPImmLegal to also return true if the value can
    be encoded as the immediate operand of a logical instruction, besides
    checking the immediate field for fmov. This optimizes some floating
    point materialization, including values used in isinf lowering.
    Reviewed By: rengolin, efriedma, evandro
    Differential Revision: https://reviews.llvm.org/D57044
    llvm-svn: 352866
* [X86][BdVer2] Transfer delays from the integer to the floating point unit. | Roman Lebedev | 2019-02-01 | 1 | -1/+4
    Summary:
    I'm unable to find this number in the "AMD SOG for family 15h".
    llvm-exegesis measures the latencies of these instructions as `2`,
    which matches the latencies specified in "AMD SOG for family 15h".
    However, if we look at Agner, Microarchitecture, "AMD Bulldozer,
    Piledriver, Steamroller and Excavator pipeline", "Data delay between
    different execution domains", the int->ivec transfer is listed as
    `8`..`10`cy of additional latency.
    Also, Agner's "Instruction tables", for Piledriver, lists their
    latencies as `12`, which is consistent with `2cy` from exegesis /
    AMD SOG + `10cy` transfer delay.
    An additional data point comes from the fact that Agner's
    "Instruction tables", for Jaguar, lists their latencies as `8`; and
    "AMD SOG for family 16h" does state the `+6cy` int->ivec delay, which
    is consistent with an instruction latency of `1` or `2`.
    Reviewers: andreadb, RKSimon, craig.topper
    Reviewed By: andreadb
    Subscribers: gbedwell, courbet, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D57300
    llvm-svn: 352861
* Provide reason messages for unviable inlining | Yevgeny Rouban | 2019-02-01 | 3 | -17/+32
    InlineCost's isInlineViable() is changed to return InlineResult
    instead of bool. This provides messages for failure reasons and
    allows getting more specific messages for cases where callsites are
    not viable for inlining.
    Reviewed By: xbolva00, anemet
    Differential Revision: https://reviews.llvm.org/D57089
    llvm-svn: 352849
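    An illustrative caller under the new signature (a sketch; the
    `message` field name reflects my understanding of the InlineResult
    API of this era and is an assumption worth double-checking):

        #include "llvm/Analysis/InlineCost.h"
        #include "llvm/IR/Function.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        // With isInlineViable() returning InlineResult instead of bool,
        // a failed viability check now carries a human-readable reason.
        static void reportViability(Function &Callee) {
          InlineResult Viable = isInlineViable(Callee);
          if (!Viable)
            errs() << Callee.getName() << " not viable for inlining: "
                   << Viable.message << "\n";
        }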
* Revert r352750. | James Henderson | 2019-02-01 | 1 | -27/+5
    This was causing a build bot failure:
    http://green.lab.llvm.org/green/job/clang-stage2-Rthinlto/15346/
    llvm-svn: 352848
* [CodeGen] Don't scavenge non-saved regs in exception throwing functions | Oliver Stannard | 2019-02-01 | 1 | -7/+9
    Previously, LiveRegUnits was assuming that if a block has no
    successors and does not return, then no registers are live at the end
    of it (because the end of the block is unreachable). This was causing
    the register scavenger to use callee-saved registers to materialise
    stack frame addresses without saving them in the prologue. This would
    normally be fine, because the end of the block is unreachable, but
    this is not legal if the block ends by throwing a C++ exception. If
    this happens, the scratch register will be modified, but its previous
    value won't be preserved, so it doesn't get restored by the exception
    unwinder.
    Differential revision: https://reviews.llvm.org/D57381
    llvm-svn: 352844
* [SLPVectorizer] Get rid of IndexQueue array from vectorizeStores. NFCI. | Yevgeny Rouban | 2019-02-01 | 1 | -27/+18
    Indices are checked as they are generated. No need to fill the whole
    array of indices.
    Differential Revision: https://reviews.llvm.org/D57144
    llvm-svn: 352839
* [RISCV] Implement RV64D codegen | Alex Bradbury | 2019-02-01 | 2 | -4/+30
    This patch:
    * Adds necessary RV64D codegen patterns
    * Modifies CC_RISCV so it will properly handle f64 types (with soft
      float ABI)
    Note that in general there is no reason to try to select fcvt.w[u].d
    rather than fcvt.l[u].d for i32 conversions, because fptosi/fptoui
    produce poison if the input won't fit into the target type.
    Differential Revision: https://reviews.llvm.org/D53237
    llvm-svn: 352833
* [SelectionDAG] Support promotion of the FPOWI integer operand | Alex Bradbury | 2019-02-01 | 2 | -0/+8
    For targets where i32 is not a legal type (e.g. 64-bit RISC-V),
    LegalizeIntegerTypes must promote the integer operand of ISD::FPOWI.
    As this is a signed value, it should be sign-extended.
    This patch enables all tests in test/CodeGen/RISCV/float-intrinsics.ll
    for RV64, as prior to this patch that file couldn't be compiled for
    RV64 due to an assertion when performing codegen for fpowi.
    Differential Revision: https://reviews.llvm.org/D54574
    llvm-svn: 352832
* [opaque pointer types] Add a FunctionCallee wrapper type, and use it. | James Y Knight | 2019-02-01 | 43 | -745/+644
    Recommit of r352791 after tweaking DerivedTypes.h slightly, so that
    gcc doesn't choke on it, hopefully.
    Original Message:
    The FunctionCallee type is effectively a {FunctionType*, Value*}
    pair, and is a useful convenience to enable code to continue passing
    the result of getOrInsertFunction() through to EmitCall, even once
    pointer types lose their pointee-type.
    Then:
    - update the CallInst/InvokeInst instruction creation functions to
      take a Callee,
    - modify getOrInsertFunction to return FunctionCallee, and
    - update all callers appropriately.
    One area of particular note is the change to the sanitizer code.
    Previously, they had been casting the result of `getOrInsertFunction`
    to a `Function*` via `checkSanitizerInterfaceFunction`, and storing
    that. That would report an error if someone had already inserted a
    function declaration with a mismatching signature. However, in
    general, LLVM allows for such mismatches, as `getOrInsertFunction`
    will automatically insert a bitcast if needed. As part of this
    cleanup, cause the sanitizer code to do the same. (It will call its
    functions using the expected signature, however they may have been
    declared.)
    Finally, in a small number of locations, callers of
    `getOrInsertFunction` actually were expecting/requiring that a brand
    new function was being created. In such cases, I've switched them to
    Function::Create instead.
    Differential Revision: https://reviews.llvm.org/D57315
    llvm-svn: 352827
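    A small sketch of the updated idiom (the example function is
    hypothetical; the point is that the FunctionCallee returned by
    getOrInsertFunction feeds CreateCall directly, with no Function*
    cast):

        #include "llvm/IR/IRBuilder.h"
        #include "llvm/IR/Module.h"
        using namespace llvm;

        // getOrInsertFunction now returns a FunctionCallee, i.e. a
        // {FunctionType*, Value*} pair, which CreateCall accepts even if
        // a prior declaration has a mismatching pointer type.
        static Value *emitPuts(Module &M, IRBuilder<> &B, Value *Str) {
          FunctionCallee Puts =
              M.getOrInsertFunction("puts", B.getInt32Ty(), Str->getType());
          return B.CreateCall(Puts, Str);
        }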
* [sanitizer-coverage] prune trace-cmp instrumentation for CMP instructions that feed into the backedge branch. | Kostya Serebryany | 2019-01-31 | 1 | -2/+34
    Instrumenting these CMP instructions is almost always useless (and
    harmful) for fuzzing.
    llvm-svn: 352818
* GlobalISel: Fix MMO creation with non-power-of-2 mem size | Matt Arsenault | 2019-01-31 | 1 | -4/+5
    It should probably just be mandatory for getTgtMemIntrinsic to return
    the alignment.
    llvm-svn: 352817
* [WebAssembly] Fix a regression selecting negative build_vector lanes | Thomas Lively | 2019-01-31 | 1 | -1/+1
    Summary:
    The custom lowering introduced in rL352592 creates build_vector nodes
    with negative i32 operands, but these operands did not meet the value
    range constraints necessary to match build_vector nodes. This CL
    fixes the issue by removing the unnecessary constraints.
    Reviewers: aheejin
    Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish
    Differential Revision: https://reviews.llvm.org/D57481
    llvm-svn: 352813
* [RISCV] Add RV64F codegen support | Alex Bradbury | 2019-01-31 | 3 | -2/+130
    This requires a little extra work due to the fact that i32 is not a
    legal type. When call lowering happens post-legalisation (e.g. when
    an intrinsic was inserted during legalisation), a bitcast from f32 to
    i32 can't be introduced. This is similar to the challenges with
    RV32D. To handle this, we introduce target-specific DAG nodes that
    perform bitcast+anyext for f32->i64 and trunc+bitcast for i64->f32.
    Differential Revision: https://reviews.llvm.org/D53235
    llvm-svn: 352807
* [WebAssembly] MC: Fix for outputting wasm object to /dev/null | Sam Clegg | 2019-01-31 | 1 | -1/+7
    Subscribers: dschuff, jgravelle-google, aheejin, sunfish, llvm-commits
    Differential Revision: https://reviews.llvm.org/D57479
    llvm-svn: 352806
* [Hexagon] Rename textually included file from .h to .inc | Richard Trieu | 2019-01-31 | 2 | -1/+1
    llvm-svn: 352802
* Revert "[opaque pointer types] Add a FunctionCallee wrapper type, and use it."James Y Knight2019-01-3143-644/+745
| | | | | | | | | This reverts commit f47d6b38c7a61d50db4566b02719de05492dcef1 (r352791). Seems to run into compilation failures with GCC (but not clang, where I tested it). Reverting while I investigate. llvm-svn: 352800
* [EarlyCSE & MSSA] Cleanup special handling for removing MemoryAccesses.Alina Sbirlea2019-01-311-30/+5
| | | | | | | | | | | | Summary: Moving special handling to MemorySSAUpdater in D57199. Reviewers: gberry, george.burgess.iv Subscribers: sanjoy, jlebar, Prazek, llvm-commits Differential Revision: https://reviews.llvm.org/D57200 llvm-svn: 352794
* [WebAssembly] Add bulk memory target feature | Thomas Lively | 2019-01-31 | 3 | -16/+30
    Summary: Also clean up some preexisting target feature code.
    Reviewers: aheejin
    Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, jfb
    Differential Revision: https://reviews.llvm.org/D57495
    llvm-svn: 352793
* [DAGCombine] Avoid CombineZExtLogicopShiftLoad if there is free ZEXT | Guozhi Wei | 2019-01-31 | 1 | -0/+3
    This patch fixes pr39098.
    For the attached test case, CombineZExtLogicopShiftLoad can optimize
    it to

        t25: i64 = Constant<1099511627775>
        t35: i64 = Constant<0>
        t0: ch = EntryToken
        t57: i64,ch = load<(load 4 from `i40* undef`, align 8), zext from i32> t0, undef:i64, undef:i64
        t58: i64 = srl t57, Constant:i8<1>
        t60: i64 = and t58, Constant:i64<524287>
        t29: ch = store<(store 5 into `i40* undef`, align 8), trunc to i40> t57:1, t60, undef:i64, undef:i64

    But later visitANDLike transforms it to

        t25: i64 = Constant<1099511627775>
        t35: i64 = Constant<0>
        t0: ch = EntryToken
        t57: i64,ch = load<(load 4 from `i40* undef`, align 8), zext from i32> t0, undef:i64, undef:i64
        t61: i32 = truncate t57
        t63: i32 = srl t61, Constant:i8<1>
        t64: i32 = and t63, Constant:i32<524287>
        t65: i64 = zero_extend t64
        t58: i64 = srl t57, Constant:i8<1>
        t60: i64 = and t58, Constant:i64<524287>
        t29: ch = store<(store 5 into `i40* undef`, align 8), trunc to i40> t57:1, t60, undef:i64, undef:i64

    and that triggers CombineZExtLogicopShiftLoad again, causing an
    infinite loop. Both forms should generate the same instructions, and
    the IR generated by CombineZExtLogicopShiftLoad looks cleaner. Since
    it looks more difficult to prevent visitANDLike from doing the
    transform, I instead prevent CombineZExtLogicopShiftLoad from doing
    the transform when the ZExt is free.
    Differential Revision: https://reviews.llvm.org/D57491
    llvm-svn: 352792
* [opaque pointer types] Add a FunctionCallee wrapper type, and use it. | James Y Knight | 2019-01-31 | 43 | -745/+644
    The FunctionCallee type is effectively a {FunctionType*, Value*}
    pair, and is a useful convenience to enable code to continue passing
    the result of getOrInsertFunction() through to EmitCall, even once
    pointer types lose their pointee-type.
    Then:
    - update the CallInst/InvokeInst instruction creation functions to
      take a Callee,
    - modify getOrInsertFunction to return FunctionCallee, and
    - update all callers appropriately.
    One area of particular note is the change to the sanitizer code.
    Previously, they had been casting the result of `getOrInsertFunction`
    to a `Function*` via `checkSanitizerInterfaceFunction`, and storing
    that. That would report an error if someone had already inserted a
    function declaration with a mismatching signature. However, in
    general, LLVM allows for such mismatches, as `getOrInsertFunction`
    will automatically insert a bitcast if needed. As part of this
    cleanup, cause the sanitizer code to do the same. (It will call its
    functions using the expected signature, however they may have been
    declared.)
    Finally, in a small number of locations, callers of
    `getOrInsertFunction` actually were expecting/requiring that a brand
    new function was being created. In such cases, I've switched them to
    Function::Create instead.
    Differential Revision: https://reviews.llvm.org/D57315
    llvm-svn: 352791
* [MemorySSA] Extend removeMemoryAccess API to optimize MemoryPhis. | Alina Sbirlea | 2019-01-31 | 1 | -1/+21
    Summary:
    EarlyCSE needs to optimize MemoryPhis after an access is removed and
    has special handling for it. This should be handled by MemorySSA
    instead. The default remains that MemoryPhis are *not* optimized
    after an access is removed.
    Reviewers: george.burgess.iv
    Subscribers: sanjoy, jlebar, llvm-commits, Prazek
    Differential Revision: https://reviews.llvm.org/D57199
    llvm-svn: 352787
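    A sketch of how a client opts in, assuming the extension is an
    optional boolean parameter on removeMemoryAccess (which is how I read
    the description; the flag name is an assumption):

        #include "llvm/Analysis/MemorySSA.h"
        #include "llvm/Analysis/MemorySSAUpdater.h"
        using namespace llvm;

        // Remove the MemoryAccess for an instruction and ask the updater
        // to also optimize any MemoryPhis affected by the removal.
        static void removeAndOptimize(MemorySSA &MSSA, Instruction &I) {
          MemorySSAUpdater Updater(&MSSA);
          if (MemoryUseOrDef *MA = MSSA.getMemoryAccess(&I))
            Updater.removeMemoryAccess(MA, /*OptimizePhis=*/true);
        }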
* [DAG][SystemZ] Define unwrapAddress for PCREL_WRAPPER. | Nirav Dave | 2019-01-31 | 2 | -0/+8
    Summary:
    Like with X86, this allows better DAG-level alias analysis and
    alignment inference for wrapped addresses.
    Reviewers: jonpa, uweigand
    Reviewed By: uweigand
    Subscribers: hiraditya, llvm-commits
    Differential Revision: https://reviews.llvm.org/D57407
    llvm-svn: 352786
* [DAG] Aggressively cleanup dangling node in CombineZExtLogicopShiftLoad. | Nirav Dave | 2019-01-31 | 1 | -0/+4
    While dangling nodes will eventually be pruned when they are
    considered, leaving them disables combines requiring single-use.
    Reviewers: Carrot, spatel, craig.topper, RKSimon, efriedma
    Subscribers: hiraditya, llvm-commits
    Differential Revision: https://reviews.llvm.org/D57520
    llvm-svn: 352784
* [Intrinsic] Expand SMULFIX to MUL, MULH[US], or [US]MUL_LOHI on vector arguments | Leonard Chan | 2019-01-31 | 2 | -14/+21
    For zero scale SMULFIX, expand into MUL, which produces better code
    for X86. For vector arguments, expand into MUL if SMULFIX is provided
    with a zero scale. Otherwise, expand into MULH[US] or [US]MUL_LOHI.
    Differential Revision: https://reviews.llvm.org/D56987
    llvm-svn: 352783
* Revert "[X86] Mark EMMS and FEMMS as clobbering MM0-7 and ST0-7."Craig Topper2019-01-312-6/+2
| | | | | | This is causing a failure in chromium llvm-svn: 352782
* Lower widenable_conditions in CGP | Philip Reames | 2019-01-31 | 1 | -0/+14
    This ensures that if we make it to the backend without lowering
    widenable_conditions first, we generate correct code. Doing it in CGP
    (instead of isel) lets us fold control flow before hitting block-local
    instruction selection.
    Differential Revision: https://reviews.llvm.org/D57473
    llvm-svn: 352779
* Trim trailing whitespace. NFCI. | Simon Pilgrim | 2019-01-31 | 1 | -1/+1
    llvm-svn: 352775
* [X86][AVX] Fold concat(broadcast(x),broadcast(x)) -> broadcast(x) | Simon Pilgrim | 2019-01-31 | 1 | -6/+5
    Differential Revision: https://reviews.llvm.org/D57514
    llvm-svn: 352774
* [X86][AVX] insert_subvector(bitcast(v), bitcast(s), c1) -> bitcast(insert_subvector(v,s,c2)) | Simon Pilgrim | 2019-01-31 | 1 | -0/+36
    Similar to what we already do in DAGCombiner, but this version also
    handles bitcasts from types with different scalar sizes, which x86 is
    better at handling.
    Differential Revision: https://reviews.llvm.org/D57514
    llvm-svn: 352773
* [CallSite removal] Remove CallSite uses from InstCombine. | Craig Topper | 2019-01-31 | 5 | -101/+110
    Reviewers: chandlerc
    Reviewed By: chandlerc
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D57494
    llvm-svn: 352771
* Recommit "[ThinLTO] Rename COMDATs for COFF when promoting/renaming COMDAT ↵Teresa Johnson2019-01-311-0/+18
| | | | | | | | leader" Recommit of r352763 with fix for use after free. llvm-svn: 352770
* Revert "[ThinLTO] Rename COMDATs for COFF when promoting/renaming COMDAT leader"Teresa Johnson2019-01-311-17/+0
| | | | | | | | | | | | This reverts commit r352763. Causing a couple bot failures, root cause pointed to by sanitizer bot: http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/28909/steps/annotate/logs/stdio Use after free. I understand the issue but will revert and test with fix before recommitting. llvm-svn: 352768
* [ThinLTO] Rename COMDATs for COFF when promoting/renaming COMDAT leader | Teresa Johnson | 2019-01-31 | 1 | -0/+17
    Summary:
    COFF requires that the COMDAT name match that of the leader. When we
    promote and rename an internal leader in ThinLTO due to an import,
    ensure we subsequently rename the associated COMDAT. Similar to
    D31963, which did this during ThinLTO module splitting.
    Fixes PR40414.
    Reviewers: pcc, inglorion
    Subscribers: mehdi_amini, dexonsmith, dmajor, llvm-commits
    Differential Revision: https://reviews.llvm.org/D57395
    llvm-svn: 352763
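    The renaming step amounts to re-keying the COMDAT to the leader's new
    name. A hedged sketch of the idea (not the actual FunctionImport
    code, which also has to handle the other COMDAT members):

        #include "llvm/IR/Comdat.h"
        #include "llvm/IR/GlobalObject.h"
        #include "llvm/IR/Module.h"
        using namespace llvm;

        // After renaming a promoted COMDAT leader, create a comdat keyed
        // on the leader's new name, preserve the selection kind, and move
        // the global onto it so COFF's name-match requirement holds.
        static void renameComdat(GlobalObject &GO, Module &M) {
          Comdat *C = GO.getComdat();
          if (!C)
            return;
          Comdat *NewC = M.getOrInsertComdat(GO.getName());
          NewC->setSelectionKind(C->getSelectionKind());
          GO.setComdat(NewC);
        }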
* [X86][AVX] Fold broadcast(bitcast(src)) -> bitcast(broadcast(src)) | Simon Pilgrim | 2019-01-31 | 1 | -0/+8
    llvm-svn: 352751
* [CommandLine] Improve help text for cl::values style options | James Henderson | 2019-01-31 | 1 | -5/+27
    In order to make an option value truly optional, both ValueOptional
    and an empty-named value are required. This empty-named value appears
    in the command-line help text, which is not ideal. This change
    improves the help text for these sorts of options in a number of
    ways:
    1) ValueOptional options with an empty-named value now print their
       help text twice: both without and then with '=<value>' after the
       name. The latter version then lists the allowed values after it.
    2) Empty-named values with no help text in ValueOptional options are
       not listed in the permitted values.
    3) Otherwise empty-named options are printed as =<empty> rather than
       simply '='.
    4) Option values without help text do not have the '-' separator
       printed.
    It also tweaks the llvm-symbolizer -functions help text to not print
    a trailing ':' as that looks bad combined with 1) above.
    Reviewed by: thopre, ruiu
    Differential Revision: https://reviews.llvm.org/D57030
    llvm-svn: 352750
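    For reference, the option shape in question looks roughly like this
    (a hypothetical option, sketched from the cl::opt API):

        #include "llvm/Support/CommandLine.h"
        using namespace llvm;

        // ValueOptional plus an empty-named enumerator makes both
        // "-style" and "-style=short" legal; the empty-named value is
        // what used to clutter the -help output.
        enum class Style { Auto, Short, Full };
        static cl::opt<Style> PrintStyle(
            "style", cl::ValueOptional, cl::init(Style::Auto),
            cl::desc("Choose the print style"),
            cl::values(clEnumValN(Style::Auto, "", ""),
                       clEnumValN(Style::Short, "short", "short form"),
                       clEnumValN(Style::Full, "full", "full form")));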
* [X86] combineExtractWithShuffle - more aggressively peek through bitcasts | Simon Pilgrim | 2019-01-31 | 1 | -4/+8
    Fixes regression introduced by rL352743.
    llvm-svn: 352745
* [X86][AVX] Enable AVX1 broadcasts in shuffle combining | Simon Pilgrim | 2019-01-31 | 1 | -7/+19
    Enables 32/64-bit scalar load broadcasts on AVX1 targets.
    The extractelement-load.ll regression will be fixed shortly in a
    followup commit.
    llvm-svn: 352743
* [X86][AVX] Fold vt1 concat_vectors(vt2 undef, vt2 broadcast(x)) --> vt1 broadcast(x) | Simon Pilgrim | 2019-01-31 | 1 | -1/+5
    If we're not inserting the broadcast into the lowest subvector then
    we can avoid the insertion by just performing a larger broadcast.
    Avoids a regression when we enable AVX1 broadcasts in shuffle
    combining.
    llvm-svn: 352742
* Default lowering for experimental.widenable.condition | Max Kazantsev | 2019-01-31 | 5 | -0/+90
    Introduces a pass that provides a default lowering strategy for the
    `experimental.widenable.condition` intrinsic, replacing all its uses
    with `i1 true`.
    Differential Revision: https://reviews.llvm.org/D56096
    Reviewed By: reames
    llvm-svn: 352739
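    The core of that default strategy can be sketched in a few lines (a
    simplification of the pass, not a copy of it):

        #include "llvm/ADT/SmallVector.h"
        #include "llvm/IR/Constants.h"
        #include "llvm/IR/Instructions.h"
        #include "llvm/IR/Module.h"
        using namespace llvm;

        // Replace every call to llvm.experimental.widenable.condition
        // with the constant `i1 true`, so guarded branches collapse to
        // their narrow form.
        static bool lowerWidenableConditions(Module &M) {
          Function *WC =
              M.getFunction("llvm.experimental.widenable.condition");
          if (!WC)
            return false;
          SmallVector<CallInst *, 8> Calls;
          for (User *U : WC->users())
            Calls.push_back(cast<CallInst>(U));
          for (CallInst *CI : Calls) {
            CI->replaceAllUsesWith(ConstantInt::getTrue(CI->getContext()));
            CI->eraseFromParent();
          }
          return !Calls.empty();
        }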