path: root/llvm/lib/Target/AArch64/AArch64LoadStoreOptimizer.cpp
Commit message | Author | Date | Files | Lines
* Rename DEBUG macro to LLVM_DEBUG. | Nicola Zaghen | 2018-05-14 | 1 file | -39/+40
  The DEBUG() macro is very generic, so it might clash with other projects.
  The renaming was done as follows:
  - git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
  - git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
  - Manual change to APInt
  - Manual change to DOCS, as the regex doesn't match it.
  In the transition period the DEBUG() macro is still present and aliased to the LLVM_DEBUG() one.
  Differential Revision: https://reviews.llvm.org/D43624
  llvm-svn: 332240
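  For reference, a typical call site after the rename looks like the snippet below; the message and function are illustrative, not taken from this commit.
  ```
  #include "llvm/Support/Debug.h"
  #include "llvm/Support/raw_ostream.h"

  #define DEBUG_TYPE "aarch64-ldst-opt"

  static void reportMerge(int Offset) {
    // Emitted only in asserts builds, and only when -debug or
    // -debug-only=aarch64-ldst-opt is passed to llc.
    LLVM_DEBUG(llvm::dbgs() << "Merging store at offset " << Offset << "\n");
  }
  ```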
* [CodeGen] Use RegUnits to track register aliases (NFC) | Jun Bum Lim | 2018-04-27 | 1 file | -43/+52
  Summary: Use RegUnits to track register aliases in PostRASink and AArch64LoadStoreOptimizer.
  Reviewers: thegameg, mcrosier, gberry, qcolombet, sebpop, MatzeB, t.p.northover, javed.absar
  Reviewed By: thegameg, sebpop
  Subscribers: javed.absar, llvm-commits, kristof.beyls
  Differential Revision: https://reviews.llvm.org/D45695
  llvm-svn: 331066
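  A minimal sketch of why register units simplify alias tracking (the helper names are hypothetical, not the pass's actual code): w0 and x0 share the same register units, so a clobber of either one is visible through the other without any explicit alias expansion.
  ```
  #include "llvm/CodeGen/LiveRegUnits.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/TargetRegisterInfo.h"

  using namespace llvm;

  // Record every register touched by MI in terms of its register units.
  static void noteClobbers(const MachineInstr &MI, LiveRegUnits &UsedRegUnits) {
    UsedRegUnits.accumulate(MI);
  }

  // True only if no unit of Reg has been touched, so a def of a sub- or
  // super-register (e.g. w3 vs. x3) is caught automatically.
  static bool stillAvailable(MCPhysReg Reg, const LiveRegUnits &UsedRegUnits) {
    return UsedRegUnits.available(Reg);
  }
  ```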
* [CodeGen] Add a new pass for PostRA sink | Jun Bum Lim | 2018-03-22 | 1 file | -36/+9
  Summary:
  This pass sinks COPY instructions into a successor block, if the COPY is not used in the current block and the COPY is live-in to a single successor (i.e., it doesn't require the COPY to be duplicated). This avoids executing the copy on paths where its result isn't needed, and also exposes additional opportunities for dead copy elimination and shrink wrapping.
  These copies were either not handled by the MachineSink pass or are inserted after it. As an example of the former case, the MachineSink pass cannot sink COPY instructions with allocatable source registers; for AArch64 this type of copy instruction is frequently used to move function parameters (PhysReg) into virtual registers in the entry block.
  For the machine IR below, this pass will sink %w19 from the entry block into its successor (%bb.1) because %w19 is only live-in in %bb.1.
  ```
  %bb.0:
    %wzr = SUBSWri %w1, 1
    %w19 = COPY %w0
    Bcc 11, %bb.2
  %bb.1:
    Live Ins: %w19
    BL @fun
    %w0 = ADDWrr %w0, %w19
    RET %w0
  %bb.2:
    %w0 = COPY %wzr
    RET %w0
  ```
  As we sink %w19 (a CSR in AArch64) into %bb.1, the shrink-wrapping pass will be able to see %bb.0 as a candidate.
  With this change I observed 12% more shrink-wrapping candidates and 13% more dead copies deleted in spec2000/2006/2017 on AArch64.
  Reviewers: qcolombet, MatzeB, thegameg, mcrosier, gberry, hfinkel, john.brawn, twoh, RKSimon, sebpop, kparzysz
  Reviewed By: sebpop
  Subscribers: evandro, sebpop, sfertile, aemerson, mgorny, javed.absar, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41463
  llvm-svn: 328237
* [AArch64] Keep track of MIFlags in the LoadStoreOptimizer | Francis Visoiu Mistrih | 2018-03-14 | 1 file | -7/+14
  Merging:
  * $x26, $x25 = frame-setup LDPXi $sp, 0
  * $sp = frame-destroy ADDXri $sp, 64, 0
  into an LDPXpost should preserve the flags from both instructions, as follows:
  * frame-setup frame-destroy LDPXpost
  Differential Revision: https://reviews.llvm.org/D44446
  llvm-svn: 327533
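  A minimal sketch of the underlying idea, assuming the merged instruction is built with a MachineInstrBuilder (the helper and parameter names are illustrative, not the pass's actual code): OR the MIFlags of both source instructions onto the new one so markers such as frame-setup and frame-destroy survive the merge.
  ```
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineInstrBuilder.h"

  using namespace llvm;

  static void copyMergedFlags(const MachineInstrBuilder &MIB,
                              const MachineInstr &LdSt,
                              const MachineInstr &BaseUpdate) {
    // setMIFlags() sets the flags on the instruction being built; passing the
    // union keeps frame-setup from LdSt and frame-destroy from BaseUpdate.
    MIB.setMIFlags(LdSt.getFlags() | BaseUpdate.getFlags());
  }
  ```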
* MachineFunction: Return reference from getFunction(); NFC | Matthias Braun | 2017-12-15 | 1 file | -1/+1
  The Function can never be nullptr, so we can return a reference.
  llvm-svn: 320884
* [CodeGen] Use MachineOperand::print in the MIRPrinter for MO_Register. | Francis Visoiu Mistrih | 2017-12-07 | 1 file | -2/+2
  Work towards the unification of MIR and debug output by refactoring the interfaces.
  For MachineOperand::print, keep a simple version that can be easily called from `dump()`, and a more complex one which will be called from both the MIRPrinter and MachineInstr::print.
  Add extra checks inside MachineOperand for detached operands (operands with getParent() == nullptr).
  https://reviews.llvm.org/D40836
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/kill: ([^ ]+) ([^ ]+)<def> ([^ ]+)/kill: \1 def \2 \3/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/kill: ([^ ]+) ([^ ]+) ([^ ]+)<def>/kill: \1 \2 def \3/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/kill: def ([^ ]+) ([^ ]+) ([^ ]+)<def>/kill: def \1 \2 def \3/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/<def>//g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<kill>/killed \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<imp-use,kill>/implicit killed \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<dead>/dead \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<def[ ]*,[ ]*dead>/dead \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<imp-def[ ]*,[ ]*dead>/implicit-def dead \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<imp-def>/implicit-def \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<imp-use>/implicit \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<internal>/internal \1/g'
  * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" -o -name "*.s" \) -type f -print0 | xargs -0 sed -i '' -E 's/([^ ]+)<undef>/undef \1/g'
  llvm-svn: 320022
* [CodeGen] Print register names in lowercase in both MIR and debug output | Francis Visoiu Mistrih | 2017-11-28 | 1 file | -2/+2
  As part of the unification of the debug format and the MIR format, always print registers as lowercase.
  * Only debug printing is affected. It now follows MIR.
  Differential Revision: https://reviews.llvm.org/D40417
  llvm-svn: 319187
* Fix a bunch more layering of CodeGen headers that are in Target | David Blaikie | 2017-11-17 | 1 file | -1/+1
  All these headers already depend on CodeGen headers, so moving them into CodeGen fixes the layering (since CodeGen depends on Target, not the other way around).
  llvm-svn: 318490
* [AArch64] Refactor the loads and stores optimizer | Evandro Menezes | 2017-11-15 | 1 file | -143/+143
  Move remaining inline matching of instructions of some optimizations into separate functions, like in the other optimizations. Otherwise, NFC.
  Differential revision: https://reviews.llvm.org/D40090
  llvm-svn: 318335
* [AArch64] Fix an assertion for pre-index generation with unscaled loads/stores. | Chad Rosier | 2017-08-04 | 1 file | -2/+17
  Differential Revision: https://reviews.llvm.org/D36248
  PR34035
  llvm-svn: 310066
* [AArch64] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC). | Eugene Zelenko | 2017-07-25 | 1 file | -5/+5
  llvm-svn: 309062
* AArch64: remove all kill flags when extending register liveness. | Tim Northover | 2017-06-26 | 1 file | -1/+7
  When we forward a stored value to a load and eliminate the load entirely, we need to make sure the liveness of the register is maintained all the way to its use. Previously we only cleared liveness on the store doing the forwarding, but there could be other killing uses in between.
  We already do the right thing when the load has to be converted into something else; it was just this one path that skipped it.
  llvm-svn: 306318
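  A hypothetical MIR-style illustration of the problem (opcodes and registers chosen only for exposition):
  ```
  STRXui %x1, %x0, 0             ; store %x1 to [x0]
  %x3 = ADDXrr killed %x1, %x2   ; a killing use between the store and the load
  %x4 = LDRXui %x0, 0            ; forwarding turns this into %x4 = COPY %x1
  ```
  After forwarding, %x1 has to stay live all the way to the COPY, so the stale kill flag on the ADDXrr must be cleared as well, not only the flags on the store itself.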
* [AArch64] Add early exit to promoteLoadFromStore. | Florian Hahn | 2017-06-21 | 1 file | -1/+4
  There should be at most a single kill flag for the promoted operand between the store/load pair.
  Discussed in https://reviews.llvm.org/D34402.
  llvm-svn: 305889
* [AArch64] Preserve register flags when promoting a load from store. | Florian Hahn | 2017-06-21 | 1 file | -3/+4
  Summary:
  This patch updates promoteLoadFromStore to use the store MachineOperand as the source operand of the new instruction instead of creating a new register MachineOperand. This way, the existing register flags are preserved.
  This fixes PR33468 (https://bugs.llvm.org/show_bug.cgi?id=33468).
  Reviewers: MatzeB, t.p.northover, junbuml
  Reviewed By: MatzeB
  Subscribers: aemerson, rengolin, javed.absar, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34402
  llvm-svn: 305885
* Sort the remaining #include lines in include/... and lib/.... | Chandler Carruth | 2017-06-06 | 1 file | -1/+1
  I did this a long time ago with a janky python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days.
  I've reverted a number of files where the results of sorting includes isn't healthy: either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or where we have particular formatting around #include lines that I didn't want to disturb in this patch.
  This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files.
  Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again).
  llvm-svn: 304787
* [AArch64] Use alias analysis in the load/store optimization pass. | Chad Rosier | 2017-03-17 | 1 file | -7/+14
  This allows the optimization to rearrange loads and stores more aggressively.
  Differential Revision: http://reviews.llvm.org/D30903
  llvm-svn: 298092
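  A rough sketch of the kind of query this enables, assuming the MachineInstr::mayAlias helper and an illustrative function name (this is not the pass's actual code): instead of conservatively giving up whenever a store sits between two candidate memory operations, ask alias analysis whether the accesses can overlap.
  ```
  #include "llvm/Analysis/AliasAnalysis.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  static bool safeToMoveLoadPast(MachineInstr &Load, MachineInstr &Store,
                                 AAResults *AA) {
    // mayAlias() conservatively answers true when either instruction lacks
    // memory operand information; TBAA metadata is consulted here.
    return !Load.mayAlias(AA, Store, /*UseTBAA=*/true);
  }
  ```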
* AArch64LoadStoreOptimizer: Correctly clear kill flags | Matthias Braun | 2017-02-17 | 1 file | -2/+4
  When promoting the Load of a Store-Load pair to a COPY, all kill flags between the store and the load need to be cleared.
  rdar://30402435
  Differential Revision: https://reviews.llvm.org/D30110
  llvm-svn: 295512
* [AArch64] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC). | Eugene Zelenko | 2017-01-25 | 1 file | -9/+21
  llvm-svn: 292996
* AArch64LoadStoreOptimizer: Update kill flags when merging stores | Matthias Braun | 2017-01-20 | 1 file | -2/+23
  Kill flags need to be updated correctly when moving stores up/down to form store pair instructions. Those invalid flags were ignored before, but as of r290014 they are recognized when using -mllvm -verify-machineinstrs.
  Also simplifies test/CodeGen/AArch64/ldst-opt-dbg-limit.mir, renames it to ldst-opt.mir, and adds new tests for this change.
  Differential Revision: https://reviews.llvm.org/D28875
  llvm-svn: 292625
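  A hypothetical MIR-style sketch of one such situation (registers and opcodes are illustrative only):
  ```
  STRWui %w0, %x2, 0             ; first store
  %w3 = ADDWrr killed %w0, %w1   ; %w0 is killed here
  STRWui %w4, %x2, 1             ; second store, chosen as the merge point
  ```
  If the pair is formed at the position of the second store, the new STPWi reads %w0 after the instruction that killed it, so the kill flag has to move from the ADDWrr onto the paired store for -verify-machineinstrs to accept the result.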
* [CodeGen] Rename MachineInstrBuilder::addOperand. NFC | Diana Picus | 2017-01-13 | 1 file | -11/+11
  Rename from addOperand to just add, to match the other method that has been added to MachineInstrBuilder for adding more than just 1 operand.
  See https://reviews.llvm.org/D28057 for the whole discussion.
  Differential Revision: https://reviews.llvm.org/D28556
  llvm-svn: 291891
* [AArch64] Fix over-eager early-exit in load-store combiner | Nirav Dave | 2017-01-04 | 1 file | -0/+3
  Fix early-exit analysis for memory operation pairing when operations are not emitted in ascending order.
  Reviewers: mcrosier, t.p.northover
  Subscribers: aemerson, rengolin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D28251
  llvm-svn: 291008
* AArch64: Enable post-ra liveness updates | Matthias Braun | 2016-12-16 | 1 file | -0/+3
  Differential Revision: https://reviews.llvm.org/D27559
  llvm-svn: 290014
* [AArch64LoadStoreOptimizer] Don't treat write to XZR/WZR as a clobber. | Geoff Berry | 2016-11-21 | 1 file | -2/+4
  Summary:
  When searching for load/store instructions to pair/merge, don't treat writes to WZR/XZR as clobbers, since they don't change the value read from WZR/XZR (which is always 0).
  Reviewers: mcrosier, junbuml, jmolloy, t.p.northover
  Subscribers: aemerson, llvm-commits, rengolin
  Differential Revision: https://reviews.llvm.org/D26921
  llvm-svn: 287592
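  A hypothetical sequence showing the effect (the assembly is illustrative only): a compare between two zero stores is encoded as a SUBS writing wzr, but reading wzr still yields 0, so it should not block the merge.
  ```
  str  wzr, [x0]
  cmp  w1, w2          ; really "subs wzr, w1, w2", which defines wzr
  str  wzr, [x0, #4]
  ; with this change the two zero stores can still be merged:
  str  xzr, [x0]
  ```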
* [AArch64] Update a FIXME comment to reflect current state. NFC. | Chad Rosier | 2016-11-11 | 1 file | -2/+4
  llvm-svn: 286625
* [AArch64] Enable merging of adjacent zero stores for all subtargets. | Chad Rosier | 2016-11-11 | 1 file | -2/+1
  This optimization merges adjacent zero stores into a wider store.
  e.g.,
    strh wzr, [x0]
    strh wzr, [x0, #2]
    ; becomes
    str wzr, [x0]
  e.g.,
    str wzr, [x0]
    str wzr, [x0, #4]
    ; becomes
    str xzr, [x0]
  Previously, this was only enabled for Kryo and Cortex-A57.
  Differential Revision: https://reviews.llvm.org/D26396
  llvm-svn: 286592
* [AArch64] Remove dead store. Found by gcc7. | Davide Italiano | 2016-11-07 | 1 file | -6/+3
  llvm-svn: 286137
* [AArch64] Removed the narrow load merging code in the ld/st optimizer. | Chad Rosier | 2016-11-07 | 1 file | -229/+42
  This feature has been disabled for some time now, so remove cruft.
  Differential Revision: https://reviews.llvm.org/D26248
  llvm-svn: 286110
* Use StringRef in Pass/PassManager APIs (NFC) | Mehdi Amini | 2016-10-01 | 1 file | -3/+1
  llvm-svn: 283004
* MachineFunctionProperties/MIRParser: Rename AllVRegsAllocated->NoVRegs, compute it | Matthias Braun | 2016-08-25 | 1 file | -1/+1
  Rename AllVRegsAllocated to NoVRegs. This avoids the connotation of running after register allocation and simply describes that no vregs are used in a machine function. With that we can simply compute the property and do not need to dump/parse it in .mir files.
  Differential Revision: http://reviews.llvm.org/D23850
  llvm-svn: 279698
* [AArch64LoadStoreOptimizer] Check aliasing correctly when creating paired loads/stores. | Eli Friedman | 2016-08-12 | 1 file | -1/+4
  The existing code accidentally skipped the aliasing check in edge cases.
  Differential revision: https://reviews.llvm.org/D23372
  llvm-svn: 278562
* [AArch64LoadStoreOpt] Handle offsets correctly for post-indexed paired loads. | Eli Friedman | 2016-08-12 | 1 file | -5/+5
  Trunk would try to create something like "stp x9, x8, [x0], #512", which isn't actually a valid instruction.
  Differential revision: https://reviews.llvm.org/D23368
  llvm-svn: 278559
* [AArch64] Re-factor code shared by AArch64LoadStoreOpt and AArch64InstrInfo. | Geoff Berry | 2016-08-12 | 1 file | -37/+3
  This re-factoring could cause the following slight changes in generated code, though none were observed during testing:
  - MachineScheduler could decide not to cluster some loads/stores if there are other load/stores with non-pairable opcodes that have the same base register and offset as a pairable set of load/stores. One case of different MachineScheduler pairing did show up in my testing, but it wasn't due to this issue; it was due to BaseMemOpClusterMutation::clusterNeighboringMemOps() being unstable w.r.t. the order in which it considers memory operations. See PR28942.
  - The ImplicitNullChecks optimization could be done for more load/store opcodes. This optimization isn't done for C/C++ code, so it didn't show up in my testing.
  Reviewers: mcrosier, t.p.northover
  Subscribers: aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D23365
  llvm-svn: 278515
* [AArch64] Load/store opt: Don't count transient instructions towards search limits. | Geoff Berry | 2016-07-21 | 1 file | -15/+14
  Summary:
  This change also changes findMatchingInsn and findMatchingUpdateInsnForward to take DBG_VALUE opcodes into account when tracking register defs and uses, which could potentially inhibit these optimizations in the presence of debug information.
  Reviewers: mcrosier
  Subscribers: aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22582
  llvm-svn: 276293
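  A minimal sketch of the search-limit idea (an illustrative helper, not the pass's real loop): transient instructions such as DBG_VALUE are skipped without being charged against the limit, so the presence of debug info does not change how far the scan reaches.
  ```
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  static MachineInstr *findMemOpWithinLimit(MachineBasicBlock::iterator I,
                                            MachineBasicBlock::iterator E,
                                            unsigned Limit) {
    unsigned Count = 0;
    for (; I != E && Count < Limit; ++I) {
      MachineInstr &MI = *I;
      if (MI.isTransient())      // DBG_VALUE, KILL, COPY, ...: no real code,
        continue;                // so it does not count toward the limit
      ++Count;
      if (MI.mayLoadOrStore())   // stand-in for the real matching predicate
        return &MI;
    }
    return nullptr;
  }
  ```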
* [AArch64] Register AArch64LoadStoreOptimizer so it can be run by llc -run-pass. NFCI. | Geoff Berry | 2016-07-20 | 1 file | -4/+0
  llvm-svn: 276193
* AArch64: Avoid implicit iterator conversions, NFC | Duncan P. N. Exon Smith | 2016-07-08 | 1 file | -155/+155
  Avoid implicit conversions from MachineInstrBundleIterator to MachineInstr* in the AArch64 backend, mainly by preferring MachineInstr& over MachineInstr* when a pointer isn't nullable.
  llvm-svn: 274924
* CodeGen: Use MachineInstr& in TargetInstrInfo, NFC | Duncan P. N. Exon Smith | 2016-06-30 | 1 file | -21/+21
  This is mostly a mechanical change to make TargetInstrInfo API take MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator) when the argument is expected to be a valid MachineInstr. This is a general API improvement.
  Although it would be possible to do this one function at a time, that would demand a quadratic amount of churn since many of these functions call each other. Instead I've done everything as a block and just updated what was necessary.
  This is mostly mechanical fixes: adding and removing `*` and `&` operators. The only non-mechanical change is to split ARMBaseInstrInfo::getOperandLatencyImpl out from ARMBaseInstrInfo::getOperandLatency. Previously, the latter took a `MachineInstr*` which it updated to the instruction bundle leader; now, the latter calls the former either with the same `MachineInstr&` or the bundle leader.
  As a side effect, this removes a bunch of MachineInstr* to MachineBasicBlock::iterator implicit conversions, a necessary step toward fixing PR26753.
  Note: I updated WebAssembly, Lanai, and AVR (despite being off-by-default) since it turned out to be easy. I couldn't run tests for AVR since llc doesn't link with it turned on.
  llvm-svn: 274189
* Untabify. | NAKAMURA Takumi | 2016-06-20 | 1 file | -1/+1
  llvm-svn: 273129
* [AArch64] Move comments closer to relevant check. NFC. | Chad Rosier | 2016-06-10 | 1 file | -6/+4
  llvm-svn: 272430
* [AArch64] Refactor a check earlier. NFC. | Chad Rosier | 2016-06-10 | 1 file | -12/+18
  llvm-svn: 272429
* AArch64: Do not test for CPUs, use SubtargetFeatures | Matthias Braun | 2016-06-02 | 1 file | -14/+2
  Testing for specific CPUs has a number of problems; better to use subtarget features:
  - When some tweak is added for a specific CPU it is often desirable for the next version of that CPU as well, yet we often forget to add it.
  - It is hard to keep track of checks scattered around the target code; declaring all target specifics together with the CPU in the tablegen file is a clearer representation.
  - Subtarget features can be tweaked from the command line.
  To discourage people from using CPU checks in the future I removed the isCortexXX(), isCyclone(), ... functions. I added a getProcFamily() function for exceptional circumstances but made it clear in the comment that usage is discouraged.
  Reformatted the feature list in AArch64.td to have one feature per line in alphabetical order to simplify merging and sorting for out-of-tree tweaks.
  No functional change intended.
  Differential Revision: http://reviews.llvm.org/D20762
  llvm-svn: 271555
* [AArch64] Disable narrow load merge by default | Jun Bum Lim | 2016-05-20 | 1 file | -1/+1
  Summary:
  As this optimization converts two loads into one load with two shift instructions, it could potentially hurt performance if a loop is arithmetic-operation intensive.
  Reviewers: t.p.northover, mcrosier, jmolloy
  Subscribers: evandro, jmolloy, aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D20172
  llvm-svn: 270251
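  Roughly what the (now disabled) transformation did, assuming little-endian layout and illustrative register choices: two adjacent halfword loads become one word load plus mask/shift extraction.
  ```
  ; before
  ldrh  w0, [x2]
  ldrh  w1, [x2, #2]
  ; after narrow load merging (sketch)
  ldr   w8, [x2]
  and   w0, w8, #0xffff
  lsr   w1, w8, #16
  ```
  The extra ALU work per loaded value is why the merge can hurt arithmetic-heavy loops.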
* [AArch64] Decouple zero store promotion from narrow ld merge. NFC. | Jun Bum Lim | 2016-05-06 | 1 file | -28/+16
  Summary:
  This change refactors the code to decouple the zero store promotion from the narrow ld merge, and adds a flag (enable-narrow-ld-merge=true) to control the narrow ld merge optimization.
  Reviewers: jmolloy, t.p.northover, mcrosier
  Subscribers: aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D19885
  llvm-svn: 268744
* Add optimization bisect opt-in calls for AArch64 passes | Andrew Kaylor | 2016-04-25 | 1 file | -0/+3
  Differential Revision: http://reviews.llvm.org/D19394
  llvm-svn: 267479
* Add MachineFunctionProperty checks for AllVRegsAllocated for target passes | Derek Schuff | 2016-04-04 | 1 file | -0/+5
  Summary: This adds the same checks that were added in r264593 to all target-specific passes that run after register allocation.
  Reviewers: qcolombet
  Subscribers: jyknight, dsanders, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18525
  llvm-svn: 265313
* [AArch64] Handle missing store pair opportunity | Jun Bum Lim | 2016-03-31 | 1 file | -22/+23
  Summary:
  This change handles a missed store-pair opportunity where the first store instruction stores zero and is followed by a non-zero store. For example, this change will convert:
    str wzr, [x8]
    str w1, [x8, #4]
  into:
    stp wzr, w1, [x8]
  Reviewers: jmolloy, t.p.northover, mcrosier
  Subscribers: flyingforyou, aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18570
  llvm-svn: 265021
* [AArch64] Fix warnings pointed out by Hal. | Chad Rosier | 2016-03-30 | 1 file | -1/+5
  llvm-svn: 264882
* [AArch64] Enable more load clustering in the MI Scheduler. | Chad Rosier | 2016-03-18 | 1 file | -29/+2
  This patch adds unscaled loads and sign-extend loads to the TII getMemOpBaseRegImmOfs API, which is used to control clustering in the MI scheduler. This is done to create more opportunities for load pairing. I've also added the scaled LDRSWui instruction, which was missing from the scaled instructions. Finally, I've added support in shouldClusterLoads for clustering adjacent sext and zext loads, which can also be paired by the load/store optimizer.
  Differential Revision: http://reviews.llvm.org/D18048
  llvm-svn: 263819
* [AArch64] Move helper functions into TII, so they can be reused elsewhere. NFC. | Chad Rosier | 2016-03-09 | 1 file | -47/+21
  llvm-svn: 263032
* [AArch64] Add MMOs to unscaled pairs. | Chad Rosier | 2016-03-08 | 1 file | -3/+2
  Test to be committed in follow-up commit, per discussion in D17097.
  http://reviews.llvm.org/D17097
  llvm-svn: 262942
* [AArch64] Add support for Qualcomm Kryo CPU. | Chad Rosier | 2016-02-12 | 1 file | -1/+1
  Machine model description by Dave Estes <cestes@codeaurora.org>.
  llvm-svn: 260686