path: root/llvm/test/CodeGen/ARM
Commit message (Author, Date; Files changed, Lines -removed/+added)
* [ARM] Don't pessimize i32 vselect. (Charlie Turner, 2015-11-17; 1 file, -5/+6)

  The underlying issues surrounding codegen for 32-bit vselects have been
  resolved. The pessimistic costs for 64-bit vselects remain due to the bad
  scalarization that is still happening there.

  I tested this on A57 in T32, A32 and A64 modes. I saw no regressions, and
  some improvements. From my benchmarks, I saw these improvements in A57 (T32):

    spec.cpu2000.ref.177_mesa                                         5.95%
    lnt.SingleSource/Benchmarks/Shootout/strcat                      12.93%
    lnt.MultiSource/Benchmarks/MiBench/telecomm-CRC32/telecomm-CRC32 11.89%

  I also measured A57 A32, A53 T32 and A9 T32 and found no performance
  regressions. I see much bigger wins in third-party benchmarks with this
  change.

  Differential Revision: http://reviews.llvm.org/D14743

  llvm-svn: 253349
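  For illustration, a minimal sketch of the kind of IR this cost change
  affects (function and types are my own example, not from the commit):

    ; A select on a vector of 32-bit elements (an "i32 vselect"). With the
    ; updated cost model this is no longer treated as prohibitively
    ; expensive, so the vectorizer is free to form it.
    define <4 x i32> @vsel(<4 x i1> %c, <4 x i32> %a, <4 x i32> %b) {
      %r = select <4 x i1> %c, <4 x i32> %a, <4 x i32> %b
      ret <4 x i32> %r
    }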
* Use TargetRegisterInfo for printing MachineOperand register comments (Dan Gohman, 2015-11-17; 3 files, -4/+4)

  Several places in AsmPrinter.cpp print comments describing MachineOperand
  registers using MCRegisterInfo, which uses MCOperand-oriented names. This
  doesn't work for targets that use virtual registers exclusively, as
  WebAssembly does, since virtual registers are represented and printed
  differently.

  This patch preserves what seems to be the spirit of r229978, avoiding the
  use of TM.getSubtargetImpl(), while still using MachineOperand-oriented
  printing for MachineOperands.

  Differential Revision: http://reviews.llvm.org/D14709

  llvm-svn: 253338
* [ARM] Match VABDL from log2 shuffles. (Charlie Turner, 2015-11-17; 1 file, -0/+38)

  Differential Revision: http://reviews.llvm.org/D14664

  llvm-svn: 253334
* Drop prelink support. (Rafael Espindola, 2015-11-17; 1 file, -2/+2)

  The way prelink used to work was:

  * The compiler decides if a given section only has relocations that are
    known to point to the same DSO. If so, it names it
    .data.rel.ro.local<something>.
  * The static linker puts all of these together.
  * The prelinker program assigns addresses to each library and resolves
    the local relocations.

  There are many problems with this:

  * It is incompatible with address space randomization.
  * The information passed by the compiler is redundant. The linker knows
    if a given relocation is in the same DSO or not. It could sort by that
    if so desired.
  * There are newer ways of speeding up DSOs (gnu hash for example).
  * Even if we want to implement this again in the compiler, the previous
    implementation is pretty broken. It talks about relocations that are
    "resolved by the static linker". If they are resolved, there are none
    left for the prelinker. What one needs to track is if an expression
    will require only dynamic relocations that point to the same DSO.

  At this point it looks like the prelinker is a historical curiosity.
  For example, fedora has retired it because it failed to build for two
  releases
  (http://pkgs.fedoraproject.org/cgit/prelink.git/commit/?id=eb43100a8331d91c801ee3dcdb0a0bb9babfdc1f).

  This patch removes support for it. That is, it stops printing the
  ".local" sections.

  llvm-svn: 253280
* Properly check if a CMPZ node is in fact comparing against zero (James Molloy, 2015-11-16; 1 file, -0/+11)

  This was left implicit and never checked, which means we could have a CMPZ
  against some non-zero value and we were carrying on with BFI conversion
  regardless.

  Caught by Oliver Stannard using csmith; regression test added.

  llvm-svn: 253195
* [ARM] Replace ARMISD::RBIT with ISD::BITREVERSE (James Molloy, 2015-11-13; 1 file, -0/+11)

  ISD::BITREVERSE matches "rbit" completely, so remove ARMISD::RBIT and mark
  ISD::BITREVERSE as legal, adding a test for lowering.

  llvm-svn: 253047
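  A minimal lowering example of the sort this enables (a sketch; the
  commit's actual test may differ):

    ; With ISD::BITREVERSE legal on ARM, this should select a single rbit.
    declare i32 @llvm.bitreverse.i32(i32)
    define i32 @rev(i32 %x) {
      %r = call i32 @llvm.bitreverse.i32(i32 %x)
      ret i32 %r
    }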
* [ARM] CMOV->BFI combining: handle both senses of CMPZ (James Molloy, 2015-11-12; 1 file, -0/+11)

  I completely misunderstood what ARMISD::CMPZ means. It's not "compare equal
  to zero", it's "compare, only setting the zero/Z flag". It can either be
  equal-to-zero or not-equal-to-zero, and we weren't checking what sense it
  was.

  If it's equal-to-zero, we can swap the operands around and pretend like it
  is not-equal-to-zero, which is both a bug fix and lets us handle more cases.

  llvm-svn: 252891
* Revert "[ARM] Enable shrink-wrapping by default."Renato Golin2015-11-127-16/+10
| | | | | | This reverts commit r252825, as it broke ASAN on ARM. Investigating... llvm-svn: 252889
* [ARM] Enable shrink-wrapping by default. (Quentin Colombet, 2015-11-11; 7 files, -10/+16)

  Differential Revision: http://reviews.llvm.org/D14357

  rdar://problem/21942589

  llvm-svn: 252825
* [ARM] Combine BFIs together (James Molloy, 2015-11-11; 1 file, -0/+39)

  If we have a chain of BFIs, we may be able to combine several together into
  one merged BFI. We can do this if the "from" bits from one BFI OR'd with the
  "from" bits from the other BFI form a contiguous range, and the same with
  the "to" bits.

  llvm-svn: 252740
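  A hedged sketch of a chain that meets the stated condition (the bit
  ranges are my own illustration):

    ; Two bitfield inserts of %x into %y: bits 3:0, then bits 7:4. Both the
    ; "from" ranges and the "to" ranges are contiguous, so the two BFIs can
    ; merge into a single BFI covering bits 7:0.
    define i32 @two_inserts(i32 %x, i32 %y) {
      %lo = and i32 %x, 15      ; x bits 3:0
      %y0 = and i32 %y, -16     ; clear y bits 3:0
      %i1 = or i32 %y0, %lo     ; first insert
      %hi = and i32 %x, 240     ; x bits 7:4
      %y1 = and i32 %i1, -241   ; clear bits 7:4
      %i2 = or i32 %y1, %hi     ; second insert
      ret i32 %i2
    }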
* Reapply "[ARM] Combine CMOV into BFI where possible"James Molloy2015-11-101-0/+34
| | | | | | | | | | | | | | | | | | Added fixes for stage2 failures: CMOV is not commutable; commuting the operands results in the condition being flipped! d'oh! Original commit message: If we have a CMOV, OR and AND combination such as: if (x & CN) y |= CM; And: * CN is a single bit; * All bits covered by CM are known zero in y; Then we can convert this to a sequence of BFI instructions. This will always be a win if CM is a single bit, will always be no worse than the TST & OR sequence if CM is two bits, and for thumb will be no worse if CM is three bits (due to the extra IT instruction). llvm-svn: 252606
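  The C pattern above, written as IR with arbitrary example constants
  (CN = 4, CM = 0x100; my sketch, not the commit's test):

    ; if (x & 4) y |= 0x100;  CN is a single bit, and bit 8 of %y is known
    ; zero after the mask, so the select can become a BFI that moves bit 2
    ; of %x into bit 8 of %y.
    define i32 @cmov_to_bfi(i32 %x, i32 %yin) {
      %y = and i32 %yin, -257        ; make bit 8 of y known zero
      %t = and i32 %x, 4
      %c = icmp ne i32 %t, 0
      %set = or i32 %y, 256
      %r = select i1 %c, i32 %set, i32 %y
      ret i32 %r
    }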
* [EABI] Add LLVM support for -meabi flag (Renato Golin, 2015-11-09; 2 files, -104/+214)

  "GCC requires the freestanding environment provide memcpy, memmove, memset
  and memcmp": https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/Standards.html

  Hence in GNUEABI targets LLVM should not convert 'memops' to their
  equivalent '__aeabi_memops'. This conversion violates the GCC contract.

  The -meabi flag controls whether or not LLVM will modify 'memops' in
  GNUEABI targets:

  * Without -meabi: use the triple default EABI.
  * With -meabi=default: use the triple default EABI.
  * With -meabi=gnu: use 'memops'.
  * With -meabi=4 or -meabi=5: use '__aeabi_memops'.
  * With -meabi set to an unknown value: same as -meabi=default.

  Patch by Vinicius Tinti.

  llvm-svn: 252462
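  A sketch of IR exercised by the flag (the memset signature is the
  pre-4.0 form with an alignment argument; the behavior notes are my
  reading of the commit message):

    ; On a *-gnueabi triple (default or -meabi=gnu) this stays a call to
    ; memset; with -meabi=4 or -meabi=5 it becomes __aeabi_memset (which
    ; also takes its arguments in a different order).
    declare void @llvm.memset.p0i8.i32(i8*, i8, i32, i32, i1)
    define void @zero(i8* %p) {
      call void @llvm.memset.p0i8.i32(i8* %p, i8 0, i32 64, i32 4, i1 false)
      ret void
    }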
* Revert "[ARM] Combine CMOV into BFI where possible"Renato Golin2015-11-091-23/+0
| | | | | | | This reverts commit r252057, as it broke ARM self-hosting buildbots, probably due to a code-gen fault. llvm-svn: 252460
* [CodeGen] Always promote f16 if not legal (Oliver Stannard, 2015-11-09; 1 file, -164/+150)

  We don't currently have any runtime library functions for operations on
  f16 values (other than conversions to and from f32 and f64), so we should
  always promote it to f32, even if that is not a legal type. In that case,
  the f32 values would be softened to f32 library calls.

  SoftenFloatRes_FP_EXTEND now needs to check the promoted operand's type,
  as it may be a no-op or require a different library call.

  getCopyFromParts and getCopyToParts now need to cope with a floating-point
  value stored in a larger integer part, as is the case for any target that
  needs to store an f16 value in a 32-bit integer register.

  Differential Revision: http://reviews.llvm.org/D12856

  llvm-svn: 252459
* DI: Reverse direction of subprogram -> function edge. (Peter Collingbourne, 2015-11-05; 17 files, -63/+63)

  Previously, subprograms contained a metadata reference to the function they
  described. Because most clients need to get or set a subprogram for a given
  function rather than the other way around, this created unneeded
  inefficiency. For example, many passes needed to call the function
  llvm::makeSubprogramMap() to build a mapping from functions to subprograms,
  and the IR linker needed to fix up function references in a way that caused
  quadratic complexity in the IR linking phase of LTO.

  This change reverses the direction of the edge by storing the subprogram as
  function-level metadata and removing DISubprogram's function field.

  Since this is an IR change, a bitcode upgrade has been provided.

  Fixes PR23367. An upgrade script for textual IR for out-of-tree clients is
  attached to the PR.

  Differential Revision: http://reviews.llvm.org/D14265

  llvm-svn: 252219
* [ARM] Combine CMOV into BFI where possible (James Molloy, 2015-11-04; 1 file, -0/+23)

  If we have a CMOV, OR and AND combination such as:

    if (x & CN)
      y |= CM;

  And:

  * CN is a single bit;
  * All bits covered by CM are known zero in y;

  Then we can convert this to a sequence of BFI instructions. This will
  always be a win if CM is a single bit, will always be no worse than the
  TST & OR sequence if CM is two bits, and for thumb will be no worse if CM
  is three bits (due to the extra IT instruction).

  llvm-svn: 252057
* ARM: add extra test for watchOS ABI (Tim Northover, 2015-10-30; 1 file, -0/+153)

  llvm-svn: 251705
* ARM: add support for WatchOS's compact unwind information. (Tim Northover, 2015-10-28; 4 files, -3/+67)

  llvm-svn: 251573
* ARM: teach backend about WatchOS and TvOS libcalls. (Tim Northover, 2015-10-28; 2 files, -0/+170)

  The most substantial changes are again for watchOS: libcalls are hard-float
  if needed and sincos has a different calling convention.

  llvm-svn: 251571
* ARM: add backend support for the ABI used in WatchOS (Tim Northover, 2015-10-28; 1 file, -0/+146)

  At the LLVM level this ABI is essentially a minimal modification of AAPCS
  to support 16-byte alignment for vector types and the stack.

  llvm-svn: 251570
* [ARM] Expand ROTL and ROTR of vector value types (Charlie Turner, 2015-10-27; 1 file, -0/+14)

  Summary: After D13851 landed, we saw backend crashes when compiling the
  reduced test case included in this patch. The right fix seems to be to
  allow these vector types for expansion in instruction selection.

  Reviewers: rengolin, t.p.northover

  Subscribers: RKSimon, t.p.northover, aemerson, llvm-commits, rengolin

  Differential Revision: http://reviews.llvm.org/D14082

  llvm-svn: 251401
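  For reference, a rotate-shaped pattern that DAGCombine may turn into an
  ISD::ROTL node on vector types (my example; the crash reproducer in the
  patch is likely different):

    ; shl/lshr/or by complementary amounts forms a rotate-left by 8; the
    ; resulting vector ROTL node must be marked Expand rather than crash
    ; instruction selection.
    define <2 x i32> @rotl8(<2 x i32> %x) {
      %hi = shl <2 x i32> %x, <i32 8, i32 8>
      %lo = lshr <2 x i32> %x, <i32 24, i32 24>
      %r = or <2 x i32> %hi, %lo
      ret <2 x i32> %r
    }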
* ARM: make sure VFP loads and stores are properly aligned. (Tim Northover, 2015-10-26; 1 file, -0/+98)

  Both VLDRS and VLDRD fault if the memory is not 4 byte aligned, which
  wasn't really being checked before, leading to faults at runtime.

  llvm-svn: 251352
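  A sketch of the problematic shape (assuming the fix keys off the IR
  alignment, as the message suggests):

    ; An under-aligned f32 load: VLDRS faults unless the address is 4-byte
    ; aligned, so this must not be selected to a plain vldr.
    define float @underaligned(float* %p) {
      %v = load float, float* %p, align 1
      ret float %v
    }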
* Fix tests. (Peter Collingbourne, 2015-10-26; 1 file, -1/+1)

  llvm-svn: 251343
* ARM/ELF: Better codegen for global variable addresses. (Peter Collingbourne, 2015-10-26; 4 files, -66/+35)

  In PIC mode we were previously computing global variable addresses (or GOT
  entry addresses) by adding the PC, the PC-relative GOT displacement and
  the GOT-relative symbol/GOT entry displacement. Because the latter two
  displacements are fixed, we ended up performing one more addition than
  necessary.

  This change causes us to compute addresses using a single PC-relative
  displacement, resulting in a shorter code sequence. This reduces code size
  by about 4% in a recent build of Chromium for Android.

  As a result of this change we no longer need to compute the GOT base
  address in the ARM backend, which allows us to remove the Global Base Reg
  pass and SDAG lowering for the GOT.

  We also now no longer use the GOT when addressing a symbol which is known
  to be defined in the same linkage unit. Specifically, the symbol must have
  either hidden visibility or a strong definition in the current module in
  order to not use the GOT.

  This is a change from the previous behaviour where we would use the GOT to
  address externally visible symbols defined in the same module. I think the
  only cases where this could matter are cases involving symbol
  interposition, but we don't really support that well anyway.

  Differential Revision: http://reviews.llvm.org/D13650

  llvm-svn: 251322
* [ARM] Handle the inline asm constraint type 'o' (James Molloy, 2015-10-26; 1 file, -0/+11)

  This means "memory with offset" and requires very little plumbing to get
  working. This fixes PR25317.

  llvm-svn: 251280
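  A sketch of an inline asm use of the constraint (the asm string and the
  exact constraint spelling here are illustrative, not taken from PR25317):

    ; "o" asks for a memory operand addressable with an offset; "=*o" makes
    ; $0 an indirect memory output.
    define void @store_imm(i32* %p) {
      call void asm "str $1, $0", "=*o,r"(i32* %p, i32 42)
      ret void
    }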
* [ARM CodeGen] @llvm.debugtrap call may be removed when restoring callee saved registers (Oleg Ranevskyy, 2015-10-23; 1 file, -0/+17)

  Summary: When ARMFrameLowering::emitPopInst generates a "pop" instruction
  to restore the callee saved registers, it checks if the LR register is
  among them. If so, the function may decide to remove the basic block's
  terminator and replace it with a "pop" to the PC register instead of LR.

  This leads to a problem when the block's terminator is preceded by a
  "llvm.debugtrap" call. The MI iterator points to the trap in such a case,
  which is also a terminator. If the function decides to restore LR to PC,
  it erroneously removes the trap.

  Reviewers: asl, rengolin

  Subscribers: aemerson, jfb, rengolin, dschuff, llvm-commits

  Differential Revision: http://reviews.llvm.org/D13672

  llvm-svn: 251123
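  A minimal shape of the bug (my sketch): the call forces LR to be saved,
  and the trap sits between the restore and the return.

    declare void @ext()
    declare void @llvm.debugtrap()
    define void @f() {
      call void @ext()              ; forces LR to be spilled/restored
      call void @llvm.debugtrap()   ; must survive the pop-to-PC rewrite
      ret void
    }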
* Fix incorrect target triple in fp16-promote.ll (Pirama Arumuga Nainar, 2015-10-22; 1 file, -96/+99)

  Summary: Hyphens were missing from the triple, causing it to be parsed
  incorrectly. This patch updates the triple and makes necessary changes to
  the expected output.

  Patch is from Vinicius Tinti.

  Reviewers: ab, tinti

  Subscribers: srhines, llvm-commits

  Differential Revision: http://reviews.llvm.org/D13792

  llvm-svn: 251020
* Add missing load/store flags to thumb2 instructions. (Pete Cooper, 2015-10-22; 1 file, -1/+1)

  These were the cause of a verifier error when building 7zip with
  -verify-machineinstrs. Running 'make check' with the verifier triggered
  the same error on the test here, so I've updated the test to run the
  verifier on one of its runs instead of adding a new one.

  While looking at this code, there was a stale comment that these
  instructions were only used for disassembly. This probably used to be the
  case, but they are now used in the 'ARM load / store optimization pass'
  too.

  This reapplies r242300 which was reverted in r242428 due to bot failures.
  Ultimately those failures were spurious and completely unrelated to this
  commit. I reverted this at the time because it was thought to be at fault.

  llvm-svn: 250969
* Adding support for TargetLoweringBase::LibCall (Artyom Skrobov, 2015-10-20; 1 file, -1/+1)

  Summary: TargetLoweringBase::Expand is defined as "Try to expand this to
  other ops, otherwise use a libcall." For ISD::UDIV and ISD::SDIV, the
  choice between the two possibilities was defined in a rather convoluted
  way:

  * if DIVREM is legal, expand to DIVREM
  * if DIVREM has a custom lowering, expand to DIVREM
  * if DIVREM libcall is defined and a remainder from the same division is
    computed elsewhere, expand to a DIVREM libcall
  * else, expand to a DIV libcall

  This had the undesirable effect that if both DIV and DIVREM are
  implemented as libcalls, then ISD::UDIV and ISD::SDIV are expanded to the
  heavier DIVREM libcall, even when the remainder isn't used.

  The new code adds a new LegalizeAction, TargetLoweringBase::LibCall, so
  that backends can directly control whether they prefer an expansion or a
  conversion to a libcall. This makes the generic lowering code even more
  generic, allowing its reuse in a wider range of target-specific
  configurations.

  The useful effect is that the ARM backend will now generate a call to
  __aeabi_{i,u}div rather than __aeabi_{i,u}divmod in cases where it doesn't
  need the remainder. There's no functional change outside the ARM backend.

  Reviewers: t.p.northover, rengolin

  Subscribers: t.p.northover, llvm-commits, aemerson

  Differential Revision: http://reviews.llvm.org/D13862

  llvm-svn: 250826
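  The ARM-visible effect, sketched (assuming a target where division is a
  libcall, i.e. no hardware divide):

    ; A udiv whose remainder is never used: with the LibCall action this
    ; can now lower to __aeabi_uidiv instead of the heavier
    ; __aeabi_uidivmod.
    define i32 @quot(i32 %a, i32 %b) {
      %q = udiv i32 %a, %b
      ret i32 %q
    }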
* Fix mapping of @llvm.arm.ssat/usat intrinsics to ssat/usat instructions (Asiri Rathnayake, 2015-10-19; 5 files, -0/+100)

  The mapping of these two intrinsics in ARMInstrInfo.td had a small
  omission which led to their operands not being validated/transformed
  before being lowered into usat and ssat instructions. This can cause
  incorrect instructions to be emitted.

  I've also added tests for the remaining two saturating arithmetic
  intrinsics @llvm.arm.qadd and @llvm.arm.qsub as they are missing codegen
  tests.

  llvm-svn: 250697
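  A small codegen test shape for one of the fixed intrinsics (the
  saturation width is chosen arbitrarily):

    ; Should select an ssat instruction once the operands are properly
    ; validated/transformed.
    declare i32 @llvm.arm.ssat(i32, i32)
    define i32 @sat8(i32 %x) {
      %r = call i32 @llvm.arm.ssat(i32 %x, i32 8)
      ret i32 %r
    }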
* [ARM] Make sure we do not dereference the end iterator when accessing debug information (Quentin Colombet, 2015-10-15; 1 file, -0/+57)

  Although the problem was always here, it would only be exposed when
  shrink-wrapping is enabled.

  rdar://problem/23110493

  llvm-svn: 250352
* ARM: tweak WoA frame lowering (Saleem Abdulrasool, 2015-10-09; 1 file, -0/+22)

  Accept r11 when targeting Windows on ARM rather than just low registers.
  Because we are in a thumb-2 only mode, this may be slightly more expensive
  in code size, but results in better code for the environment since it
  spills the frame register, which is generally desired for fast stack
  walking as per the ABI.

  llvm-svn: 249804
* [ARM] Promote helper function to SelectionDAG. (Chad Rosier, 2015-10-07; 1 file, -0/+9)

  I'll be using the function in a similar combine for AArch64. The helper
  was also improved to handle undef values.

  Part of http://reviews.llvm.org/D13442

  llvm-svn: 249572
* [ARM] Use correct half-precision functions in EABI mode (Oliver Stannard, 2015-10-07; 1 file, -26/+36)

  The ARM RTABI defines the half- to single-precision float conversion
  functions with an __aeabi prefix, but libgcc only has them with a __gnu
  prefix. Therefore we need to emit the __aeabi version when compiling with
  an eabi or eabihf triple, and the __gnu version with a gnueabi or
  gnueabihf triple.

  llvm-svn: 249565
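  The conversion in question, as IR (a sketch; the triple decides which
  library function is emitted):

    ; On *-eabi / *-eabihf this should call __aeabi_h2f; on *-gnueabi /
    ; *-gnueabihf it should call libgcc's __gnu_h2f_ieee.
    define float @widen(half %h) {
      %f = fpext half %h to float
      ret float %f
    }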
* [ARM] Prevent PerformVDIVCombine from combining a vcvt/vdiv with 8 lanes. (Chad Rosier, 2015-10-07; 1 file, -0/+8)

  This would result in a crash since the vcvt used does not support v8i32
  types.

  llvm-svn: 249560
* [ARM][AArch64] Only lower to interleaved load/store if the target has NEON (Jeroen Ketema, 2015-10-07; 1 file, -48/+89)

  Without an additional check for NEON, the compiler crashes during
  legalization of NEON ldN/stN.

  Differential Revision: http://reviews.llvm.org/D13508

  llvm-svn: 249550
* [ARM] Simplify tests and make checks more rigid. NFC. (Chad Rosier, 2015-10-06; 1 file, -67/+36)

  llvm-svn: 249432
* [ARM] Modify codegen for memcpy intrinsic to prefer LDM/STM. (Scott Douglass, 2015-10-05; 3 files, -2/+189)

  We were previously codegen'ing memcpy as regular load/store operations and
  hoping that the register allocator would allocate registers in ascending
  order so that we could apply an LDM/STM combine after register allocation.
  According to the commit that first introduced this code (r37179), we
  planned to teach the register allocator to allocate the registers in
  ascending order. This never got implemented, and up to now we've been
  stuck with very poor codegen.

  A much simpler approach for achieving better codegen is to create MEMCPY
  pseudo instructions, attach scratch virtual registers to them and then,
  post register allocation, expand the MEMCPYs into LDM/STM pairs using the
  scratch registers. The register allocator will have picked arbitrary
  registers which we sort when expanding the MEMCPY. This approach also
  avoids the need to repeatedly calculate offsets which ultimately ought to
  be eliminated pre-RA in order to decrease register pressure.

  Fixes PR9199 and PR23768.

  [This is based on Peter Collingbourne's r238473 which was reverted.]

  Differential Revision: http://reviews.llvm.org/D13239

  Change-Id: I727543c2e94136e0f80b8e22d5642d7b9ee5b458
  Author: Peter Collingbourne <peter@pcc.me.uk>

  llvm-svn: 249322
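  A sketch of a memcpy that benefits (pre-4.0 intrinsic signature with an
  alignment argument):

    ; A small, 4-byte-aligned, fixed-size copy: now expanded via the
    ; MEMCPY pseudo into LDM/STM pairs after register allocation.
    declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
    define void @copy32(i8* %d, i8* %s) {
      call void @llvm.memcpy.p0i8.p0i8.i32(i8* %d, i8* %s, i32 32, i32 4, i1 false)
      ret void
    }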
* [ARM] More care with Thumb1 writeback in ARMLoadStoreOptimizer (Scott Douglass, 2015-10-01; 1 file, -0/+27)

  Differential Revision: http://reviews.llvm.org/D13240

  llvm-svn: 249002
* [ARM][NEON] Use address space in vld([1234]|[234]lane) and vst([1234]|[234]lane) instructions (Jeroen Ketema, 2015-09-30; 33 files, -408/+568)

  This commit changes the interface of the vld[1234], vld[234]lane, and
  vst[1234], vst[234]lane ARM neon intrinsics and associates an address
  space with the pointer that these intrinsics take. This changes, e.g.,

    <2 x i32> @llvm.arm.neon.vld1.v2i32(i8*, i32)

  to

    <2 x i32> @llvm.arm.neon.vld1.v2i32.p0i8(i8*, i32)

  This change ensures that address spaces are fully taken into account in
  the ARM target during lowering of interleaved loads and stores.

  Differential Revision: http://reviews.llvm.org/D12985

  llvm-svn: 248887
* [ARM] Don't generate clrex for pre-v7 targets. (Ahmed Bougacha, 2015-09-26; 1 file, -1/+32)

  Since r248294, we emit clrex, but it doesn't exist on v6.

  llvm-svn: 248640
* ARM: address WoA division limitation (Saleem Abdulrasool, 2015-09-25; 2 files, -36/+49)

  We now emit the compiler generated divide-by-zero check that was needed
  for the MSVC routines. We construct a pseudo-instruction for the DBZ check
  as the operation requires splitting up the BB. For the 64-bit operations,
  we need to custom expand the node as we need to insert the DBZ check and
  then emit the libcall to the appropriate name. Because this is target
  specific, it seemed better to reproduce the expansion operation from the
  target-agnostic type legalization rather than sink this there to avoid the
  duplication. The division library calls now match MSVC semantically.

  llvm-svn: 248561
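  The 64-bit case described above, sketched (the __rt_sdiv64 helper name is
  my assumption about the MSVC runtime entry point):

    ; On a windows-on-ARM triple this expands to a divide-by-zero check on
    ; %b followed by a call to the MSVC 64-bit divide helper (__rt_sdiv64,
    ; assumed name).
    define i64 @div64(i64 %a, i64 %b) {
      %q = sdiv i64 %a, %b
      ret i64 %q
    }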
* Introduce target hook for optimizing register copies (Matt Arsenault, 2015-09-24; 4 files, -76/+98)

  Allow a target to do something other than search for copies that will
  avoid cross register bank copies. Implement for SI by only rewriting the
  most basic copies, so it should look through anything like a subregister
  extract.

  I'm not entirely satisfied with this because it seems like eliminating a
  reg_sequence that isn't fully used should work generically for all targets
  without them having to override something. However, it seems to be tricky
  to have a simple implementation of this without rewriting to invalid kinds
  of subregister copies on some targets.

  I'm not sure if there is currently a generic way to easily check if a
  subregister index would be valid for the current use. The current set of
  TargetRegisterInfo::get*Class functions don't quite behave like I would
  expect (e.g. getSubClassWithSubReg returns the maximal register class
  rather than the minimal), so I'm not sure how to make the generic test
  keep searching if SrcRC:SrcSubReg is a valid replacement for
  DefRC:DefSubReg. Making the default implementation check for simple copies
  breaks a variety of ARM and x86 tests by producing illegal subregister
  uses.

  The ARM tests are not actually changed since it should still be using the
  same sharesSameRegisterFile implementation, this just relaxes them to not
  check for specific registers.

  llvm-svn: 248478
* ARM: fix folding stack adjustment (again again again...) (Tim Northover, 2015-09-23; 1 file, -1/+2)

  This time, the issue is that we weren't accounting for the possibility
  that aligned DPRs could have been stored after the final "push" in a
  prologue. When that happened we effectively moved a "sub sp, #N" from
  below the aligned stores to above them, and everything went to pot.

  To make it worse, I'd actually committed something testing that we
  produced wrong code, so the test update is tiny.

  llvm-svn: 248437
* [ARM] Emit clrex in the expanded cmpxchg fail block. (Ahmed Bougacha, 2015-09-22; 5 files, -69/+138)

  ARM counterpart to r248291:

  In the comparison failure block of a cmpxchg expansion, the initial
  ldrex/ldxr will not be followed by a matching strex/stxr. On ARM/AArch64,
  this unnecessarily ties up the execution monitor, which might have a
  negative performance impact on some uarchs. Instead, release the monitor
  in the failure block. The clrex instruction was designed for this: use it.

  Also see ARMARM v8-A B2.10.2: "Exclusive access instructions and Shareable
  memory locations".

  Differential Revision: http://reviews.llvm.org/D13033

  llvm-svn: 248294
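  The IR shape whose expansion gains the clrex (a generic sketch):

    ; When the comparison fails, the expanded loop's failure block now
    ; ends with clrex so the exclusive monitor is released.
    define i32 @cas(i32* %p, i32 %old, i32 %new) {
      %pair = cmpxchg i32* %p, i32 %old, i32 %new seq_cst seq_cst
      %val = extractvalue { i32, i1 } %pair, 0
      ret i32 %val
    }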
* [ARM] Do not scale vext with a factor (Jeroen Ketema, 2015-09-21; 1 file, -0/+11)

  The vext pseudo-instruction takes the number of elements that need to be
  extracted, not the number of bytes. Hence, use the number of elements
  directly instead of scaling them with a factor.

  Reviewers: Silviu Baranga, James Molloy (not reflected in the differential
  revision)

  Differential Revision: http://reviews.llvm.org/D12974

  llvm-svn: 248208
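  A shuffle that selects one element past the first vector, as a concrete
  sketch of the element-vs-byte distinction:

    ; Elements <1,2> of the concatenation of %a and %b: a VEXT by one
    ; *element* (four bytes for i32). The bug scaled the element count by
    ; the element size a second time.
    define <2 x i32> @ext1(<2 x i32> %a, <2 x i32> %b) {
      %r = shufflevector <2 x i32> %a, <2 x i32> %b, <2 x i32> <i32 1, i32 2>
      ret <2 x i32> %r
    }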
* Update edge weights properly when merging blocks in if-conversion. (Cong Hou, 2015-09-18; 1 file, -0/+6)

  In if-conversion, there is a utility function MergeBlocks() that is used
  to merge blocks. However, when new edges are built in this function the
  edge weight is either not provided or not updated properly, leading to a
  modified CFG with incorrect edge weights. This patch corrects this issue.

  Differential Revision: http://reviews.llvm.org/D12513

  llvm-svn: 248030
* Limit the range of processors supported by ARM fast isel to v6 or later, as that's all that is tested right now. (Eric Christopher, 2015-09-18; 2 files, -36/+0)

  Fixes PR24858.

  llvm-svn: 248027
* Scaling up values in ARMBaseInstrInfo::isProfitableToIfCvt() before they are scaled by a probability to avoid a precision issue (Cong Hou, 2015-09-18; 4 files, -9/+9)

  In ARMBaseInstrInfo::isProfitableToIfCvt(), there is a simple cost model
  in which the number of cycles is scaled by a probability to estimate the
  cost. However, when the number of cycles is small (which is usually the
  case), there is a precision issue after the computation. To avoid this
  issue, this patch scales those cycles by 1024 (chosen to make the
  multiplication a little faster) before they are scaled by the probability.
  Other variables are also scaled up for the final comparison.

  Differential Revision: http://reviews.llvm.org/D12742

  llvm-svn: 248018
* [ShrinkWrap] Refactor the handling of infinite loop in the analysis. (Quentin Colombet, 2015-09-17; 1 file, -0/+62)

  * Strengthen the logic to be sure we hoist the restore point out of the
    current loop. (This fixes a bug with an infinite loop, added as part of
    the patch.)
  * Walk over the exit blocks of the current loop to converge to the desired
    restore point in one iteration of the update loop.

  llvm-svn: 247958