path: root/llvm/lib/Target/ARM
Commit message (Author, Date, Files, Lines -removed/+added)
...
* [ARM] Attempt to use whole register vmovs for MVE shuffles. (David Green, 2019-12-08, 1 file, -0/+90)
  MVE doesn't have the range of shuffle instructions available in Neon. We also cannot use the trick of cutting a difficult vector shuffle in half to simplify things. Instead we need to be more careful about how we lower shuffles.
  This patch adds an extra combine that attempts to find "whole lane" vmovs when lowering shuffles of smaller types. This helps us make some shuffles a lot simpler, generating single lane movs for the parts that can make use of it, falling back to the original shuffle for the rest.
  Differential Revision: https://reviews.llvm.org/D69509
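  A hedged C sketch of the kind of shuffle this combine targets: adjacent i16 lanes that move together as a whole 32-bit lane (it uses Clang's generic __builtin_shufflevector rather than an ACLE call; the function name and mask are illustrative only):

      #include <arm_mve.h>

      int16x8_t pick_whole_pairs(int16x8_t a, int16x8_t b) {
          /* each adjacent pair (0,1), (8,9), (4,5), (12,13) stays together,
             so whole 32-bit lane moves can cover parts of the shuffle */
          return __builtin_shufflevector(a, b, 0, 1, 8, 9, 4, 5, 12, 13);
      }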
* [ARM] Disable VLD4 under MVE (David Green, 2019-12-08, 1 file, -1/+6)
  Alas, using half the available vector registers in a single instruction is just too much for the register allocator to handle. The mve-vldst4.ll test here fails when these instructions are enabled at present. This patch disables the generation of VLD4 and VST4 by adding a mve-max-interleave-factor option, which we currently default to 2.
  Differential Revision: https://reviews.llvm.org/D71109
* Revert "ARM-Darwin: keep the frame register reserved even if not updated."Alina Sbirlea2019-12-061-1/+1
| | | | | | | | This reverts commit a7d90af1be48234ce583e00fb16e33633d44ae38. This revision came back as the root-cause for crashes in internal ARM-IOS apps. Reproducer in https://bugs.llvm.org/show_bug.cgi?id=44231.
* [ARM][MVE] Fix copy-paste error in VQSHL instruction ids. (Simon Tatham, 2019-12-06, 1 file, -6/+6)
  Summary: The immediate forms of the MVE VQSHL instruction have MC names like `MVE_VSLIimms8` and `MVE_VSLIimmu32`. Those names are confusing, because VSLI is a completely different shift instruction with no semantic relation to VQSHL. But it just happens to be defined immediately before VQSHL in `ARMInstrMVE.td`, so this looks like a copy-paste error. Renamed the ids to match the instruction name.
  Reviewers: ostannard, dmgreen, MarkMurrayARM, miyuki
  Reviewed By: miyuki
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D71114
* [ARM][MVE][Intrinsics] Add VMULH/VRMULH intrinsics. (Mark Murray, 2019-12-04, 1 file, -14/+40)
  Summary: Add MVE VMULH/VRMULH intrinsics and unit tests.
  Reviewers: simon_tatham, ostannard, dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70948
* [CodeGen] Move ARMCodegenPrepare to TypePromotion (Sam Parker, 2019-12-03, 4 files, -1074/+1)
  Convert ARMCodeGenPrepare into a generic type promotion pass by:
  - Removing the insertion of arm specific intrinsics to handle narrow types as we weren't using this.
  - Removing ARMSubtarget references.
  - Now query a generic TLI object to know which types should be promoted and what they should be promoted to.
  - Move all codegen tests into Transforms folder and testing using opt and not llc, which is how they should have been written in the first place...
  The pass searches up from icmp operands in an attempt to safely promote types so we can avoid generating unnecessary unsigned extends during DAG ISel.
  Differential Revision: https://reviews.llvm.org/D69556
* [ARM] Add ARMVCCThen to tablegen and make use of it. NFC (David Green, 2019-12-02, 2 files, -71/+76)
  Similar to the parent, this adds some constants to tablegen to replace the existing magic values.
  Differential Revision: https://reviews.llvm.org/D70825
* [ARM] Add ARMCC constants to tablegen. NFC (David Green, 2019-12-02, 3 files, -151/+133)
  I got tired of looking at magic constants in tablegen files. This adds condition codes like ARMCCeq and makes use of them. I also removed the extra patterns for reverse condition codes from D70296, they should now be covered by the parent commit.
  Differential Revision: https://reviews.llvm.org/D70824
* [ARM] Add some VCMP folding and canonicalisation (David Green, 2019-12-02, 3 files, -27/+62)
  The VCMP instructions in MVE can accept a register or ZR, but only as the right-hand operand. Most of the time this will already be correct because the icmp will have been canonicalised that way already. There are some cases in the lowering of float conditions that this will not apply to, though. This code should fix up those cases.
  Differential Revision: https://reviews.llvm.org/D70822
* [ARM,MVE] Add intrinsics to deal with predicates. (Simon Tatham, 2019-12-02, 1 file, -12/+14)
  Summary: This commit adds the `vpselq` intrinsics which take an MVE predicate word and select lanes from two vectors; the `vctp` intrinsics which create a tail predicate word suitable for processing the first m elements of a vector (e.g. in the last iteration of a loop); and `vpnot`, which simply complements a predicate word and is just syntactic sugar for the `~` operator.
  The `vctp` ACLE intrinsics are lowered to the IR intrinsics we've already added (and which D70592 just reorganized). I've filled in the missing isel rule for VCTP64, and added another set of rules to generate the predicated forms.
  I needed one small tweak in MveEmitter to allow the `unpromoted` type modifier to apply to predicates as well as integers, so that `vpnot` doesn't pointlessly convert its input integer to an `<n x i1>` before complementing it.
  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70485
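  A minimal C sketch of how these intrinsics combine, assuming a target with MVE and the ACLE arm_mve.h header (the wrapper function name is illustrative):

      #include <arm_mve.h>

      int32x4_t blend_tail(int32x4_t a, int32x4_t b, uint32_t n) {
          mve_pred16_t p = vctp32q(n);   /* predicate covering the first n of 4 lanes */
          p = vpnot(p);                  /* complement: the remaining lanes instead */
          return vpselq_s32(a, b, p);    /* per-lane select between a and b */
      }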
* [ARM,MVE] Rename and clean up VCTP IR intrinsics. (Simon Tatham, 2019-12-02, 2 files, -7/+12)
  Summary: D65884 added a set of Arm IR intrinsics for the MVE VCTP instruction, to use in tail predication. But the 64-bit one doesn't work properly: its predicate type is `<2 x i1>` / `v2i1`, which isn't a legal MVE type (due to not having a full set of instructions that manipulate it usefully). The test of `vctp64` in `basic-tail-pred.ll` goes through `opt` fine, as the test expects, but if you then feed it to `llc` it causes a type legality failure at isel time.
  The usual workaround we've been using in the rest of the MVE intrinsics family is to bodge `v2i1` into `v4i1`. So I've adjusted the `vctp64` IR intrinsic to do that, and completely removed the code (and test) that uses that intrinsic for 64-bit tail predication. That will allow me to add isel rules (upcoming in D70485) that actually generate the VCTP64 instruction.
  Also renamed all four of these IR intrinsics so that they have `mve` in the name, since its absence was confusing.
  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: MarkMurrayARM
  Subscribers: samparker, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70592
* [ARM][AArch64] Complex addition Neon intrinsics for Armv8.3-A (Victor Campos, 2019-12-02, 1 file, -0/+21)
  Summary: Add support for vcadd_* family of intrinsics. This set of intrinsics is available in Armv8.3-A.
  The fp16 versions require the FP16 extension, which has been available (opt-in) since Armv8.2-A.
  Reviewers: t.p.northover
  Reviewed By: t.p.northover
  Subscribers: t.p.northover, kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70862
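  A hedged usage sketch, assuming an Armv8.3-A target with the complex-number extension enabled and arm_neon.h available (the wrapper name is illustrative):

      #include <arm_neon.h>

      float32x4_t complex_add_rot90(float32x4_t a, float32x4_t b) {
          /* complex addition with the second operand rotated by 90 degrees */
          return vcaddq_rot90_f32(a, b);
      }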
* [ARM][MVE][Intrinsics] Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics. (Mark Murray, 2019-12-02, 1 file, -38/+55)
  Summary: Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics and their predicated versions. Add unit tests.
  Subscribers: kristof.beyls, hiraditya, dmgreen, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70829
* [ARM] Remove VHADD patterns (David Green, 2019-12-02, 1 file, -54/+0)
  These instructions do not work quite like I expected them to. They perform the addition and then shift in a higher precision integer, so do not match up with the patterns that we added. For example with s8s, adding 100 and 100 should wrap leaving the shift to work on a negative number. VHADD will instead do the arithmetic in higher precision, giving 100 overall. The vhadd gives a "better" result, but not one that matches up with the input.
  I am just removing the patterns here. We might be able to re-add them in the future by checking for wrap flags or changing bitwidths. But for the moment just remove them to remove the problem cases.
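  A small worked example of the mismatch, in plain C rather than intrinsics:

      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          int8_t a = 100, b = 100;
          /* the removed pattern: add in i8 (which wraps to -56), then shift */
          int8_t pattern_result = (int8_t)((int8_t)(a + b) >> 1);          /* -28 */
          /* what VHADD actually computes: add in wider precision, then halve */
          int8_t vhadd_result = (int8_t)(((int16_t)a + (int16_t)b) >> 1);  /* 100 */
          printf("%d vs %d\n", pattern_result, vhadd_result);
          return 0;
      }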
* Revert "[ARM] Allocatable Global Register Variables for ARM"Carey Williams2019-11-298-68/+22
| | | | This reverts commit 2d739f98d8a53e38bf9faa88cdb6b0c2a363fb77.
* [ARM] Fix instruction selection for ARMISD::CMOV with f16 type (Victor Campos, 2019-11-29, 2 files, -1/+8)
  Summary: In the cases where the CMOV (f16) SDNode is used with condition codes LT, LE, VC or NE, it is successfully selected into a VSEL instruction. In the remaining cases, however, instruction selection fails since VSEL does not support other condition codes.
  This patch handles such cases by using the single-precision version of the VMOV instruction.
  Reviewers: ostannard, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70667
* [ARM][MVE][Intrinsics] Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests. (Mark Murray, 2019-11-27, 1 file, -45/+53)
  Summary: Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests.
  Reviewers: simon_tatham, ostannard, dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70547
* [ARM][MVE][Intrinsics] Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests. (Mark Murray, 2019-11-27, 1 file, -21/+49)
  Summary: Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests.
  Reviewers: simon_tatham, ostannard, dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70546
* [ARM][MVE][Intrinsics] Add MVE VABD intrinsics. Add unit tests. (Mark Murray, 2019-11-27, 1 file, -9/+53)
  Summary: Add MVE VABD intrinsics. Add unit tests.
  Reviewers: simon_tatham, ostannard, dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70545
* [ARM] Replace arm_neon_vqadds with sadd_sat (David Green, 2019-11-27, 2 files, -28/+31)
  This replaces the A32 NEON vqadds, vqaddu, vqsubs and vqsubu intrinsics with the target independent sadd_sat, uadd_sat, ssub_sat and usub_sat. This helps generate vqadds from standard IR nodes, which might be produced from the vectoriser. The old variants are removed in the process.
  Differential Revision: https://reviews.llvm.org/D69350
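  A sketch of saturating-add semantics in plain C, the kind of scalar logic that can map onto the target-independent sadd_sat node (the helper name is illustrative, not a library function):

      #include <stdint.h>

      int8_t saturating_add_i8(int8_t a, int8_t b) {
          int sum = a + b;                       /* compute in wider precision */
          if (sum > INT8_MAX) return INT8_MAX;   /* clamp on overflow */
          if (sum < INT8_MIN) return INT8_MIN;   /* clamp on underflow */
          return (int8_t)sum;
      }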
* [ARM] Clean up the load and store code. NFC (David Green, 2019-11-26, 1 file, -263/+246)
  Some of these patterns have grown quite organically. I've tried to organise them a little here, moving all the PatFlags together and giving them a more consistent naming scheme, to allow some of the later patterns to be merged into a single multiclass.
  Differential Revision: https://reviews.llvm.org/D70178
* [Codegen][ARM] Add addressing modes from masked loads and stores (David Green, 2019-11-26, 3 files, -62/+248)
  MVE has a basic symmetry between its normal load/store operations and the masked variants. This means that masked loads and stores can use pre-inc and post-inc addressing modes, just like the standard loads and stores already do.
  To enable that, this patch adds all the relevant infrastructure for treating masked loads/stores addressing modes in the same way as normal loads/stores. This involves:
  - Adding an AddressingMode to MaskedLoadStoreSDNode, along with an extra Offset operand that is added after the PtrBase.
  - Extending the IndexedModeActions from 8 bits to 16 bits to store the legality of masked operations as well as normal ones. This array is fairly small, so doubling the size still won't make it very large. Offset masked loads can then be controlled with setIndexedMaskedLoadAction, similar to standard loads.
  - The same methods that combine to indexed loads, such as CombineToPostIndexedLoadStore, are adjusted to handle masked loads in the same way.
  - The ARM backend is then adjusted to make use of these indexed masked loads/stores.
  - The X86 backend is adjusted to hopefully be no functional changes.
  Differential Revision: https://reviews.llvm.org/D70176
* [ARM][ReachingDefs] Remove dead code in loloops. (Sam Parker, 2019-11-26, 1 file, -49/+140)
  Add some more helper functions to ReachingDefs to query the uses of a given MachineInstr and also to query whether two MachineInstrs use the same def of a register.
  For Arm, while tail-predicating, these helpers are used in the low-overhead loops to remove the dead code that calculates the number of loop iterations.
  Differential Revision: https://reviews.llvm.org/D70240
* [ARM][ReachingDefs] RDA in LoLoops (Sam Parker, 2019-11-26, 1 file, -93/+39)
  Add several new methods to ReachingDefAnalysis:
  - getReachingMIDef: instead of returning an integer, return the MachineInstr that produces the def.
  - getInstFromId: return the MachineInstr to which the given integer corresponds.
  - hasSameReachingDef: return whether two MachineInstrs use the same def of a register.
  - isRegUsedAfter: return whether a register is used after a given MachineInstr.
  These methods have been used in ARMLowOverhead to replace searching for uses/defs.
  Differential Revision: https://reviews.llvm.org/D70009
* [ARM][ConstantIslands] Correct block size update (Sam Parker, 2019-11-26, 1 file, -10/+10)
  When inserting a non-decrementing LE, the basic block was being resized to take into consideration that a tCMP and tBcc had been combined into one T1 instruction. This is not true in the LE case where we generate a CBN?Z and an LE.
  Differential Revision: https://reviews.llvm.org/D70536
* [ARM] Generate CMSE instructions from CMSE intrinsics (Momchil Velikov, 2019-11-25, 1 file, -4/+12)
  This patch adds instruction selection patterns for the TT, TTT, TTA, and TTAT instructions and tests for llvm.arm.cmse.tt, llvm.arm.cmse.ttt, llvm.arm.cmse.tta, and llvm.arm.cmse.ttat intrinsics (added in a previous patch).
  Patch by Javed Absar.
  Differential Revision: https://reviews.llvm.org/D70407
* [ARM][MVE] Select vqneg (Anna Welker, 2019-11-25, 1 file, -11/+19)
  Adds a pattern to ARMInstrMVE.td to use a VQNEG instruction if an equivalent multi-instruction construct is found.
  Differential Revision: https://reviews.llvm.org/D70491
* [cmake] Explicitly mark libraries defined in lib/ as "Component Libraries" (Tom Stellard, 2019-11-21, 5 files, -5/+5)
  Summary: Most libraries are defined in the lib/ directory but there are also a few libraries defined in tools/, e.g. libLLVM, libLTO. I'm defining "Component Libraries" as libraries defined in lib/ that may be included in libLLVM.so. Explicitly marking the libraries in lib/ as component libraries allows us to remove some fragile checks that attempt to differentiate between lib/ libraries and tools/ libraries:
  1. In tools/llvm-shlib, because llvm_map_components_to_libnames(LIB_NAMES "all") returned a list of all libraries defined in the whole project, there was custom code needed to filter out libraries defined in tools/, none of which should be included in libLLVM.so. This code assumed that any library defined as static was from lib/ and everything else should be excluded. With this change, llvm_map_components_to_libnames(LIB_NAMES, "all") only returns libraries that have been added to the LLVM_COMPONENT_LIBS global cmake property, so this custom filtering logic can be removed. Doing this also fixes the build with BUILD_SHARED_LIBS=ON and LLVM_BUILD_LLVM_DYLIB=ON.
  2. There was some code in llvm_add_library that assumed that libraries defined in lib/ would not have LLVM_LINK_COMPONENTS or ARG_LINK_COMPONENTS set. This is only true because libraries defined in lib/ use LLVMBuild.txt and don't set these values. This code has been fixed now to check if the library has been explicitly marked as a component library, which should now make it easier to remove LLVMBuild at some point in the future.
  I have tested this patch on Windows, MacOS and Linux with release builds and the following combinations of CMake options:
  - "" (No options)
  - -DLLVM_BUILD_LLVM_DYLIB=ON
  - -DLLVM_LINK_LLVM_DYLIB=ON
  - -DBUILD_SHARED_LIBS=ON
  - -DBUILD_SHARED_LIBS=ON -DLLVM_BUILD_LLVM_DYLIB=ON
  - -DBUILD_SHARED_LIBS=ON -DLLVM_LINK_LLVM_DYLIB=ON
  Reviewers: beanz, smeenai, compnerd, phosek
  Reviewed By: beanz
  Subscribers: wuzish, jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, mgorny, mehdi_amini, sbc100, jgravelle-google, hiraditya, aheejin, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, steven_wu, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, dang, Jim, lenary, s.egerton, pzheng, sameer.abuasal, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70179
* [ARM][MVE] Select vqabs (Anna Welker, 2019-11-20, 1 file, -0/+35)
  Adds a pattern to ARMInstrMVE.td to use a VQABS instruction if an equivalent multi-instruction construct is found.
  Differential revision: https://reviews.llvm.org/D70181
* [ARM] MVE interleaving load and stores. (David Green, 2019-11-19, 3 files, -41/+107)
  Now that we have the intrinsics, we can add VLD2/4 and VST2/4 lowering for MVE. This works the same way as Neon, recognising the load/shuffles combination and converting them into intrinsics in a pre-isel pass, which just calls getMaxSupportedInterleaveFactor, lowerInterleavedLoad and lowerInterleavedStore.
  The main difference to Neon is that we do not have a VLD3 instruction. Otherwise most of the code works very similarly, with just some minor differences in the form of the intrinsics to work around. VLD3 is disabled by making isLegalInterleavedAccessType return false for those cases.
  We may need some other future adjustments, such as VLD4 taking up half the available registers so it should maybe cost more. This patch should get the basics in though.
  Differential Revision: https://reviews.llvm.org/D69392
* [ARM,MVE] Add intrinsics for scalar shifts. (Simon Tatham, 2019-11-19, 2 files, -8/+44)
  This fills in the small family of MVE intrinsics that have nothing to do with vectors: they implement bit-shift operations on 32- or 64-bit values held in one or two general-purpose registers. Most of these shift operations saturate if shifting left, and round to nearest if shifting right, although LSLL and ASRL behave like ordinary shifts.
  When these instructions take a variable shift count in a register, they pay attention to its sign, so that (for example) LSLL or UQRSHLL will shift left if given a positive number but right if given a negative one. That makes even LSLL and ASRL different enough from standard LLVM IR shift semantics that I couldn't see any better alternative than to simply model the whole family as a set of MVE-specific IR intrinsics. (The //immediate// forms of LSLL and ASRL, on the other hand, do behave exactly like a standard IR shift of a 64-bit value. In fact, those forms don't have ACLE intrinsics defined at all, because you can just write an ordinary C shift operation if you want one of those.)
  The 64-bit shifts have to be instruction-selected in C++, because they deliver two output values. But the 32-bit ones are simple enough that I could write a DAG isel pattern directly into each Instruction record.
  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D70319
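  A plain-C illustration (not the ACLE intrinsics themselves) of the sign-dependent shift direction described above; the helper name is made up and in-range counts are assumed:

      #include <stdint.h>

      uint64_t shift_by_signed_count(uint64_t value, int32_t count) {
          if (count >= 0)
              return value << count;    /* positive count: shift left */
          return value >> (-count);     /* negative count: shift right */
      }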
* DAG: Add function context to isFMAFasterThanFMulAndFAdd (Matt Arsenault, 2019-11-19, 2 files, -2/+4)
  AMDGPU needs to know the FP mode for the function to answer this correctly when this is removed from the subtarget. AArch64 had to make this more complicated by using this from an IR hook, so add an IR typed overload.
* [ARM][MVE] Enable narrow vectors for tail pred (Sam Parker, 2019-11-19, 1 file, -1/+1)
  Remove the restriction, from the MVE tail predication pass, that all masked vector instructions need to be 128-bit. This allows us to support extending loads and truncating stores.
  Differential Revision: https://reviews.llvm.org/D69946
* [ARM][MVE] Tail predication conversion (Sam Parker, 2019-11-19, 2 files, -134/+295)
  This patch modifies ARMLowOverheadLoops to convert a predicated vector low-overhead loop into a tail-predicated one. This is currently a very basic conversion, with the following restrictions:
  - Operates only on single block loops.
  - The loop can only contain a single vctp instruction.
  - No other instructions can write to the vpr.
  - We only allow a subset of the mve instructions in the loop.
  TODO: Pass the number of elements, not the number of iterations, to dlstp/wlstp.
  Differential Revision: https://reviews.llvm.org/D69945
* [SVE][CodeGen] Scalable vector MVT size queries (Graham Hunter, 2019-11-18, 1 file, -1/+1)
  * Implements scalable size queries for MVTs, split out from D53137.
  * Contains a fix for FindMemType to avoid using scalable vector type to contain non-scalable types.
  * Explicit casts for several places where implicit integer sign changes or promotion from 32 to 64 bits caused problems.
  * CodeGenDAGPatterns will treat scalable and non-scalable vector types as different.
  Reviewers: greened, cameron.mcinally, sdesmalen, rovka
  Reviewed By: rovka
  Differential Revision: https://reviews.llvm.org/D66871
* [ARM] Allocatable Global Register Variables for ARM (Anna Welker, 2019-11-18, 8 files, -22/+68)
  Provides support for using r6-r11 as globally scoped register variables. This requires a -ffixed-rN flag in order to reserve rN against general allocation. If for a given GRV declaration the corresponding flag is not found, or the register in question is the target's FP, we fail with a diagnostic.
  Differential Revision: https://reviews.llvm.org/D68862
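  A hedged C sketch of the feature: a globally scoped register variable pinned to r9, which assumes the translation unit is built with -ffixed-r9 so the register is reserved (variable and function names are illustrative):

      /* compile with something like: clang --target=arm-none-eabi -ffixed-r9 -c grv.c */
      register unsigned int current_core_id asm("r9");

      unsigned int get_core_id(void) {
          return current_core_id;   /* reads r9 directly; no memory access */
      }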
* MCObjectStreamer: assign MCSymbols in the dummy fragment to offset 0. (James Y Knight, 2019-11-16, 1 file, -2/+1)
  In MCObjectStreamer, when there is no current fragment, initially symbols are created in a "pending" state and assigned to a dummy empty fragment. Previously, they were not being assigned an offset, and thus evaluateAbsolute would fail if trying to evaluate an expression 'a - b', where both 'a' and 'b' were in this pending state.
  Also slightly refactored the EmitLabel overload which takes an MCFragment for clarity.
  Fixes: https://llvm.org/PR41825
  Differential Revision: https://reviews.llvm.org/D70062
* [ARM,MVE] Add reversed isel patterns for MVE `vcmp qN,rN` (Simon Tatham, 2019-11-15, 1 file, -13/+48)
  Summary: As well as vector/vector compare instructions, MVE also has a family of comparisons taking a vector and a scalar, which compare every lane of the vector against the same value. We generate those at isel time using isel patterns that match `(ARMvcmp vector, (ARMvdup scalar))`.
  This commit adds corresponding patterns for the operand-reversed form `(ARMvcmp (ARMvdup scalar), vector)`, with condition codes swapped as necessary. That way, we can still generate the vector/scalar compare instruction if the IR happens to have been rearranged to put the operands the other way round, which can happen in some optimization phases.
  Previously, a vcmp the other way round was handled by emitting a `vdup` instruction to //explicitly// replicate the scalar input into a vector, and then doing a vector/vector comparison.
  I haven't added a new test, because it turned out that several existing tests were already exhibiting that failure mode. So just updating the expected output in the existing MVE codegen tests demonstrates what's been improved.
  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70296
* [ARM][MVE] tail-predication (Sjoerd Meijer, 2019-11-15, 1 file, -0/+3)
  This is a follow up of d90804d, to also flag fmcp instructions as instructions that we do not support in tail-predicated vector loops.
  Differential Revision: https://reviews.llvm.org/D70295
* ARM: allow rewriting frame indexes for all prefetch variants. (Tim Northover, 2019-11-14, 1 file, -0/+14)
  For some reason we could handle PLD but not PLDW or PLI, but all of them can potentially refer to the stack region (if weirdly for PLI).
* Fix uninitialized variable warnings. NFCI. (Simon Pilgrim, 2019-11-14, 2 files, -6/+5)
* [NFC] Fix typo in ARMBaseRegisterInfo (Anna Welker, 2019-11-14, 1 file, -1/+1)
* [LiveInterval] Allow updating subranges with slightly out-dated IR (Quentin Colombet, 2019-11-13, 2 files, -0/+8)
  During register coalescing, we update the live-intervals on-the-fly. To do that we are in this strange mode where the live-intervals can be slightly out-of-sync (more precisely they are forward looking) compared to what the IR actually represents. This happens because the register coalescer only updates the IR when it is done with updating the live-intervals, and it has to do it this way because updating the IR on-the-fly would actually clobber some information on how the live-ranges that are being updated look.
  This is problematic for updates that rely on the IR to accurately represent the state of the live-ranges. Right now, we have only one of those: stripValuesNotDefiningMask.
  To reconcile this need of out-of-sync IR, this patch introduces a new argument to LiveInterval::refineSubRanges that allows the code doing the live range updates to reason about how the code will look after the coalescer has rewritten the registers. Essentially this captures how a subregister index will be offset to match its position in a new register class.
  E.g., let's say we want to merge:
      V1.sub1:<2 x s32> = COPY V2.sub3:<4 x s32>
  We do that by choosing a class where sub1:<2 x s32> and sub3:<4 x s32> overlap, i.e., by choosing a class where we can find "offset + 1 == 3". Put differently, we align V2's sub3 with V1's sub1:
      V2:          sub0 sub1 sub2 sub3
      V1: <offset> sub0 sub1
  This offset will look like a composed subregidx in the class:
      V1.(composed sub2 with sub1):<4 x s32> = COPY V2.sub3:<4 x s32>
  Now if we didn't rewrite the uses and def of V1, all the checks for V1 need to account for this offset to match what the live intervals intend to capture.
  Prior to this patch, we would fail to recognize the uses and def of V1 and would end up with machine verifier errors: No live segment at def. This could lead to miscompile as we would drop some live-ranges and thus, miss some interferences.
  For this problem to trigger, we need to reach stripValuesNotDefiningMask while having a mismatch between the IR and the live-ranges (i.e., we have to apply a subreg offset to the IR). This requires the following three conditions:
  1. An update of overlapping subreg lanes: e.g., dsub0 == <ssub0, ssub1>
  2. An update with Tuple registers with a possibility to coalesce the subreg index: e.g., v1.dsub_1 == v2.dsub_3
  3. Subreg liveness enabled, so that the update looks at the IR to decide what is alive and what is not, i.e., calls stripValuesNotDefiningMask.
  None of the targets that currently use subreg liveness (i.e., the targets that fulfill #3: Hexagon, AMDGPU, PowerPC, and SystemZ IIRC) expose #1 and #2, so this patch also artificially enables subreg liveness for ARM, so that a nice test case can be attached.
* Fix comment spelling {addresing -> addressing} (NFC) (Matthew Malcomson, 2019-11-13, 1 file, -1/+1)
* [ARM][MVE] canTailPredicateLoop (Sjoerd Meijer, 2019-11-13, 2 files, -9/+100)
  This implements TTI hook 'preferPredicateOverEpilogue' for MVE. This is a first version and it operates on single block loops only. With this change, the vectoriser will now determine if tail-folding scalar remainder loops is possible/desired, which is the first step to generate MVE tail-predicated vector loops.
  This is disabled by default for now, i.e., it depends on option -disable-mve-tail-predication, which is off by default.
  I will follow up on this soon with a patch for the vectoriser to respect loop hint 'vectorize.predicate.enable'. I.e., with this loop hint set to Disabled, we don't want to tail-fold and we shouldn't query this TTI hook, which is done in D70125.
  Differential Revision: https://reviews.llvm.org/D69845
* [ARM,MVE] Use VMOV.{S8,S16} for sign-extended extractelement. (Simon Tatham, 2019-11-13, 1 file, -3/+4)
  MVE includes instructions that extract an 8- or 16-bit lane from a vector and sign-extend it into the output 32-bit GPR. `ARMInstrMVE.td` already included isel patterns to select those instructions in response to the `ARMISD::VGETLANEs` selection-DAG node type. But `ARMISD::VGETLANEs` was never actually generated, because the code that creates it was conditioned on NEON only.
  It's an easy fix to enable the same code for integer MVE, and now IR that sign-extends the result of an extractelement (whether explicitly or as part of the function call ABI) will use `vmov.s8` instead of `vmov.u8` followed by `sxtb`.
  Reviewers: SjoerdMeijer, dmgreen, ostannard
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70132
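  A small C sketch of source that produces such a sign-extended extractelement, assuming MVE and arm_mve.h (the function name is illustrative):

      #include <arm_mve.h>

      int32_t lane3_signed(int8x16_t v) {
          /* the i8 lane is sign-extended into the 32-bit return value */
          return vgetq_lane_s8(v, 3);
      }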
* ARM: Don't emit R_ARM_NONE relocations to compact unwinding decoders in .ARM.exidx on Android. (Peter Collingbourne, 2019-11-12, 2 files, -9/+20)
  These relocations are specified by the ARM EHABI (section 6.3). As I understand it, their purpose is to accommodate unwinder implementations that wish to reduce code size by placing the implementations of the compact unwinding decoders in a separate translation unit, and using extern weak symbols to refer to them from the main unwinder implementation, so that they are only linked when something in the binary needs them in order to unwind.
  However, neither of the unwinders used on Android (libgcc, LLVM libunwind) use this technique, and in fact emitting these relocations ends up being counterproductive to code size because they cause a copy of the unwinder to be statically linked into most binaries, regardless of whether it is actually needed. Furthermore, these relocations create circular dependencies (between libc and the unwinder) in cases where the unwinder is dynamically linked and libc contains compact unwind info.
  Therefore, deviate from the EHABI here and stop emitting these relocations on Android.
  Differential Revision: https://reviews.llvm.org/D70027
* Use MCRegister in copyPhysReg (Matt Arsenault, 2019-11-11, 6 files, -9/+9)
* [AArch64][X86] Don't assume __powidf2 is available on Windows. (Eli Friedman, 2019-11-08, 1 file, -56/+5)
  We had some code for this for 32-bit ARM, but this doesn't really need to be in target-specific code; generalize it.
  (I think this started showing up recently because we added an optimization that converts pow to powi.)
  Differential Revision: https://reviews.llvm.org/D69013
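  A hedged C example of the pow-to-powi shape involved here: an integer exponent that can lower to the llvm.powi intrinsic, which in turn may expect a __powidf2 libcall that is not available on Windows (the function name is illustrative):

      double cube(double x) {
          /* integer exponent: eligible for powi rather than a full pow call */
          return __builtin_powi(x, 3);
      }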
* Reland: [TII] Use optional destination and source pair as a return value; NFC (Djordje Todorovic, 2019-11-08, 2 files, -26/+17)
  Refactor usage of isCopyInstrImpl, isCopyInstr and isAddImmediate methods to return optional machine operand pair of destination and source registers.
  Patch by Nikola Prica
  Differential Revision: https://reviews.llvm.org/D69622