path: root/llvm/lib/Target/AArch64
Commit log (each entry: commit message, author, date, files changed, lines -removed/+added)
...
* Add missing MIR serialization text for AArch64II::MO_TAGGED. (Evgeniy Stepanov, 2019-08-15; 1 file, -3/+6)
  Reviewers: pcc
  Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66312
  llvm-svn: 369053
* [llvm] Migrate llvm::make_unique to std::make_unique (Jonas Devlieghere, 2019-08-15; 7 files, -25/+25)
  Now that we've moved to C++14, we no longer need the llvm::make_unique
  implementation from STLExtras.h. This patch is a mechanical replacement
  of (hopefully) all the llvm::make_unique instances across the monorepo.
  llvm-svn: 369013
* [AArch64] Change location of frame-record within callee-save area. (Sander de Smalen, 2019-08-15; 3 files, -26/+88)
  This patch changes the location of the frame-record (FP, LR) to the
  bottom of the callee-saved area. According to the AAPCS the location of
  the frame-record within the stackframe is unspecified (section 5.2.3
  The Frame Pointer), so the compiler should be free to choose a
  different location.

  The reason for changing the location of the frame-record is to prepare
  the frame for allocating an SVE area below the callee-saves. This way
  the compiler can use the VL-scaled addressing modes to directly access
  SVE objects from the frame-pointer.

                :                   :
            | stack |           | stack |
            | args  |           | args  |
            +-------+           +-------+
            |  x30  |           |  x19  |
            |  x29  |           |  x20  |
      FP -> |- - - -|           |  x21  |
            |  x19  |    ==>    |  x22  |
            |  x20  |           |- - - -|
            |  x21  |           |  x30  |
            |  x22  |           |  x29  |
            +-------+           +-------+ <- FP
            |///////|           |///////|      // realignment gap
            |- - - -|           |- - - -|
            |spills/|           |spills/|
            | locals|           | locals|
      SP -> +-------+           +-------+ <- SP

  Things to point out:
  - The algorithm to find a paired register should be prevented from
    accidentally pairing some callee-saved register with LR that is not
    FP, since they should always be paired together when the frame has a
    frame-record.
  - For Darwin platforms the location of the frame-record is unchanged,
    since the unwind encoding does not allow for encoding this position
    dynamically and other tools currently depend on the former layout.

  Reviewers: efriedma, rovka, rengolin, thegameg, greened, t.p.northover
  Reviewed By: efriedma
  Differential Revision: https://reviews.llvm.org/D65653
  llvm-svn: 368987
* [AArch64][GlobalISel] Custom selection for s8 load acquire. (Amara Emerson, 2019-08-14; 1 file, -1/+8)
  Implement this single atomic load instruction so that we can compile
  stack protector code.
  Differential Revision: https://reviews.llvm.org/D66245
  llvm-svn: 368923
* [AArch64][GlobalISel] RBS: Treat s128s like vectors when unmerging. (Amara Emerson, 2019-08-13; 1 file, -1/+1)
  The destinations should be FPRs (for now).
  Differential Revision: https://reviews.llvm.org/D66184
  llvm-svn: 368775
* [AArch64] Remove incorrect usage of MONonTemporal. (Eli Friedman, 2019-08-13; 1 file, -2/+1)
  This has no effect at the moment, but might matter if we try to
  implement non-temporal loads in the future.
  llvm-svn: 368770
* GlobalISel: Change representation of shuffle masks (Matt Arsenault, 2019-08-13; 1 file, -44/+7)
  Currently shuffle masks get emitted as any other constant, and you end
  up with a bunch of virtual registers of G_CONSTANT with a
  G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
  doesn't fit this pattern. This isn't an ideal representation; the new
  one should avoid legalization and have fewer opportunities for a
  representational error.

  Rather than invent a new shuffle mask operand type, similar to what
  ShuffleVectorSDNode does, just track the original IR Constant mask
  operand. I don't completely like the idea of adding another link to the
  IR, but MIR is already quite dependent on IR constants, and this will
  allow sharing the shuffle mask utility functions with the IR.
  llvm-svn: 368704
* [AArch64][GlobalISel] Replace explicit vreg creation with implicit using SrcOp. NFC. (Amara Emerson, 2019-08-13; 1 file, -3/+4)
  llvm-svn: 368653
* [GlobalISel] Make the InstructionSelector instance non-const, allowing state to be maintained. (Amara Emerson, 2019-08-13; 3 files, -6/+16)
  Currently we can't keep any state in the selector object that we get
  from the subtarget. As a result we have to plumb all our variables
  through multiple functions. This change makes it non-const and adds a
  virtual init() method to allow further state to be captured for each
  target. AArch64 makes use of this in this patch to cache a call to
  hasFnAttribute(), which is expensive to call and is used on each
  selection of G_BRCOND. (A sketch of the pattern follows this entry.)
  Differential Revision: https://reviews.llvm.org/D65984
  llvm-svn: 368652
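  A minimal self-contained sketch of that init() pattern; the stub types
  and attribute string below are illustrative stand-ins, not the exact
  GlobalISel API:

  ```c++
  #include <set>
  #include <string>

  // Stand-in for the per-function context a real selector receives.
  struct FunctionStub {
    std::set<std::string> Attrs;
    bool hasFnAttribute(const std::string &A) const { return Attrs.count(A) != 0; }
  };

  class InstructionSelectorSketch {
  public:
    virtual ~InstructionSelectorSketch() = default;
    // Hypothetical per-function init() hook: cache the expensive
    // attribute query once instead of redoing it on every G_BRCOND.
    virtual void init(const FunctionStub &F) {
      ProduceNonFlagSettingCondBr =
          !F.hasFnAttribute("speculative-load-hardening");
    }
    bool selectCondBranch() const { return ProduceNonFlagSettingCondBr; }

  protected:
    bool ProduceNonFlagSettingCondBr = false; // state needs a non-const selector
  };
  ```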
* [aarch64] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM (Daniel Sanders, 2019-08-12; 21 files, -159/+159)
  Summary: This clang-tidy check looks for unsigned integer variables
  whose initializer starts with an implicit cast from llvm::Register and
  changes the type of the variable to llvm::Register (dropping the llvm::
  where possible).

  Manual fixups in:
  - AArch64InstrInfo.cpp: genFusedMultiply() now takes a Register*
    instead of unsigned*
  - AArch64LoadStoreOptimizer.cpp: ternary operator was ambiguous between
    Register/MCRegister. Settled on Register.

  Depends on D65919
  Reviewers: aemerson
  Subscribers: jholewinski, MatzeB, qcolombet, dschuff, jyknight,
  dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100,
  jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton,
  fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos,
  sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan,
  rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji,
  Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits
  Tags: #llvm
  Differential Revision for full review was: https://reviews.llvm.org/D65962
  llvm-svn: 368628
* [globalisel] Add G_SEXT_INREG (Daniel Sanders, 2019-08-09; 1 file, -0/+2)
  Summary: Targets often have instructions that can sign-extend certain
  cases faster than the equivalent shift-left/arithmetic-shift-right.
  Such cases can be identified by matching a shift-left/shift-right pair,
  but there are some issues with this in the context of combines. For
  example, suppose you can sign-extend 8-bit up to 32-bit with a target
  extend instruction.

    %1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
    %2:_(s32) = G_ASHR %1:_(s32), i32 24
    %3:_(s32) = G_ASHR %2:_(s32), i32 1

  would reasonably combine to:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 25

  which no longer matches the special case. If your shifts and extend are
  equal cost, this would break even as a pair of shifts, but if your
  shift is more expensive than the extend then it's cheaper as:

    %2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
    %3:_(s32) = G_ASHR %2:_(s32), i32 1

  It's possible to match the shift-pair in ISel and emit an extend and
  ashr. However, this is far from the only way to break this shift pair
  and make it hard to match the extends. Another example is that with the
  right known-zeros, this:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 24
    %3:_(s32) = G_MUL %2:_(s32), i32 2

  can become:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 23

  All upstream targets have been configured to lower it to the current
  G_SHL,G_ASHR pair but will likely want to make it legal in some cases
  to handle their faster cases.

  To follow-up: Provide a way to legalize based on the constant. At the
  moment, I'm thinking that the best way to achieve this is to provide
  the MI in LegalityQuery, but that opens the door to breaking core
  principles of the legalizer (legality is not context sensitive). That
  said, it's worth noting that looking at other instructions and acting
  on that information doesn't violate this principle in itself. It's only
  a violation if, at the end of legalization, a pass that checks legality
  without being able to see the context would say an instruction might
  not be legal. That's a fairly subtle distinction, so to give a concrete
  example, saying %2 in:

    %1 = G_CONSTANT 16
    %2 = G_SEXT_INREG %0, %1

  is legal is in violation of that principle if the legality of %2
  depends on %1 being constant and/or being 16. However, legalizing to
  either:

    %2 = G_SEXT_INREG %0, 16

  or:

    %1 = G_CONSTANT 16
    %2:_(s32) = G_SHL %0, %1
    %3:_(s32) = G_ASHR %2, %1

  depending on whether %1 is constant and 16 does not violate that
  principle since both outputs are genuinely legal.

  Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm
  Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls,
  javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D61289
  llvm-svn: 368487
* [AArch64] Set pref. func. align to 8 bytes on Neoverse E1 & Cortex-A65 (Pablo Barrio, 2019-08-09; 1 file, -0/+2)
  Summary: The Arm Neoverse E1 and Cortex-A65 Software Optimization
  Guides [1][2], Section "4.7 Branch instruction alignment", state:

  "It is preferable for branch targets, including subroutine entry
  points, to be placed on aligned 64-bit boundaries to maximize
  instruction fetch efficiency."

  This patch sets the preferred function alignment on Neoverse E1 and
  Cortex-A65 to 2^3 = 8B. This was already the case in some Cortex-A CPUs
  such as Cortex-A53. (A sketch of this kind of tuning follows.)

  [1] https://developer.arm.com/docs/swog466751/latest/arm-neoversetm-e1-core-software-optimization-guide
  [2] https://developer.arm.com/docs/swog010045/latest/arm-cortex-a65-core-software-optimization-guide

  Reviewers: dmgreen, fhahn, samparker
  Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65937
  llvm-svn: 368431
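  A self-contained sketch of what such per-CPU tuning looks like; the
  enum and field names are illustrative stand-ins, not the actual
  AArch64Subtarget members:

  ```c++
  #include <cstdio>

  enum class CPU { CortexA53, CortexA65, NeoverseE1, Generic };

  struct TuningSketch {
    unsigned PrefFunctionLog2Alignment = 0; // log2 of the byte alignment
  };

  TuningSketch tuneFor(CPU C) {
    TuningSketch T;
    switch (C) {
    case CPU::CortexA53:
    case CPU::CortexA65:
    case CPU::NeoverseE1:
      T.PrefFunctionLog2Alignment = 3; // 2^3 = 8-byte-aligned entry points
      break;
    case CPU::Generic:
      break; // keep the default
    }
    return T;
  }

  int main() {
    std::printf("Neoverse E1 preferred alignment: %u bytes\n",
                1u << tuneFor(CPU::NeoverseE1).PrefFunctionLog2Alignment);
  }
  ```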
* AArch64: support TLS on Darwin platforms in GlobalISel. (Tim Northover, 2019-08-09; 1 file, -4/+34)
  All TLS access on Darwin is in the "general dynamic" form where we call
  a function to resolve the address, so the implementation is pretty
  simple.
  llvm-svn: 368418
* GlobalISel: pack various parameters for lowerCall into a struct. (Tim Northover, 2019-08-09; 2 files, -31/+17)
  I've now needed to add an extra parameter to this call twice recently.
  Not only is the signature getting extremely unwieldy, but just updating
  all of the callsites and implementations is a pain. Putting the
  parameters in a struct sidesteps both issues (see the sketch below).
  llvm-svn: 368408
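  A self-contained sketch of that refactor; the struct name and fields
  below mirror the shape of the change but are assumptions, not the
  in-tree interface:

  ```c++
  #include <string>
  #include <vector>

  // Stand-in for per-argument lowering info.
  struct ArgInfo {
    std::string Reg;
  };

  // One struct instead of an ever-growing parameter list: adding a field
  // (e.g. IsTailCall) no longer forces edits at every call site and
  // every target override.
  struct CallLoweringInfoSketch {
    std::string Callee;
    unsigned CallConv = 0;
    ArgInfo OrigRet;
    std::vector<ArgInfo> OrigArgs;
    bool IsTailCall = false;
  };

  bool lowerCall(const CallLoweringInfoSketch &Info) {
    // A real implementation would emit the ABI-specific call sequence.
    return !Info.Callee.empty();
  }
  ```

  Call sites then populate one Info object and pass it along, so a new
  parameter only touches the struct and the targets that care about it.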
* [AArch64] Do not emit '#' before immediates in inline asm (Pirama Arumuga Nainar, 2019-08-08; 1 file, -2/+1)
  Summary: The A64 assembly language does not require the '#' character
  to introduce constant immediate operands. Avoid the '#' since the
  AArch64 asm parser does not accept '#' before the lane specifier and
  rejects the following:

    __asm__ ("fmla v2.4s, v0.4s, v1.s[%0]" :: "I"(0x1))

  Fix a test to not expect the '#' and add a new test case with the above
  asm.
  Fixes: https://github.com/android-ndk/ndk/issues/1036
  Reviewers: peter.smith, kristof.beyls
  Subscribers: javed.absar, hiraditya, llvm-commits, srhines
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65550
  llvm-svn: 368320
* [AArch64][WinCFI] Do not pair callee-save instructions in LoadStoreOptimizer (Sander de Smalen, 2019-08-07; 1 file, -0/+12)
  Prevent the LoadStoreOptimizer from pairing any load/store instructions
  with instructions from the prologue/epilogue if the CFI information has
  encoded the operations as separate instructions. This would otherwise
  lead to a mismatch of the actual prologue size from the size as
  recorded in the Windows CFI.
  Reviewers: efriedma, mstorsjo, ssijaric
  Reviewed By: efriedma
  Differential Revision: https://reviews.llvm.org/D65817
  llvm-svn: 368164
* [GISel]: Add GISelKnownBits analysis (Aditya Nandakumar, 2019-08-06; 1 file, -4/+13)
  https://reviews.llvm.org/D65698
  This adds a KnownBits analysis pass for GISel. This was done as a pass
  (compared to static functions) so that we can add other features such
  as caching queries (within a pass and across passes) in the future.
  This patch only adds the basic pass boilerplate and implements a lazy,
  non-caching known-bits implementation (ported from SelectionDAG). I've
  also hooked up the AArch64PreLegalizerCombiner pass to use this; there
  should be no compile-time regression as the analysis is lazy. (The
  sketch below illustrates the known-bits idea itself.)
  llvm-svn: 368065
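  A stand-alone sketch of the known-bits concept (a lattice plus a
  transfer function); this is not the GISelKnownBits API, just the idea
  it ports from SelectionDAG:

  ```c++
  #include <cstdint>
  #include <cstdio>

  // Known-bits lattice: per bit position, we may know it is 0, know it
  // is 1, or know nothing (both masks clear at that position).
  struct KnownBitsSketch {
    uint64_t Zero = 0; // bits proven to be 0
    uint64_t One = 0;  // bits proven to be 1
  };

  // Transfer function for AND: a result bit is known 0 if either input
  // bit is known 0, and known 1 only if both input bits are known 1.
  KnownBitsSketch computeForAnd(KnownBitsSketch L, KnownBitsSketch R) {
    KnownBitsSketch K;
    K.Zero = L.Zero | R.Zero;
    K.One = L.One & R.One;
    return K;
  }

  int main() {
    KnownBitsSketch C;               // the constant 0xFF: fully known
    C.One = 0xFF;
    C.Zero = ~uint64_t{0xFF};
    KnownBitsSketch X;               // a completely unknown value
    KnownBitsSketch R = computeForAnd(X, C);
    std::printf("known-zero mask: %#llx\n",
                static_cast<unsigned long long>(R.Zero)); // top 56 bits known 0
  }
  ```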
* [globalisel] Allow SrcOp to convert an APInt and render it as an immediate operand (MO.isImm() == true) (Daniel Sanders, 2019-08-06; 2 files, -2/+2)
  Summary: This is tested by D61289 but has been pulled into a separate
  patch at a reviewer's request.
  Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette,
  arsenm, rovka
  Reviewed By: arsenm
  Subscribers: javed.absar, hiraditya, wdng, kristof.beyls,
  Petar.Avramovic, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D61321
  llvm-svn: 368063
* [AArch64] NFC: Generalize emitFrameOffset to support more than byte offsets. (Sander de Smalen, 2019-08-06; 1 file, -65/+83)
  Refactor emitFrameOffset to accept a StackOffset struct as its offset
  argument. This method currently only supports byte offsets (MVT::i8)
  but will be extended in a later patch to support scalable offsets
  (MVT::nxv1i8) as well.
  Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
  Reviewed By: efriedma
  Differential Revision: https://reviews.llvm.org/D61436
  llvm-svn: 368049
* AArch64: bail instead of asserting on unexpected type in G_CONSTANT 0. (Tim Northover, 2019-08-06; 1 file, -2/+2)
  llvm-svn: 368031
* [AArch64] NFC: Add generic StackOffset to describe scalable offsets. (Sander de Smalen, 2019-08-06; 7 files, -74/+199)
  To support spilling/filling of scalable vectors we need a more generic
  representation of a stack offset than simply 'int'. For this we
  introduce the StackOffset struct, which comprises multiple offsets
  sized by their respective MVTs. Byte-offsets will thus be a simple
  tuple such as { offset, MVT::i8 }. Adding two byte-offsets will result
  in a byte offset { offsetA + offsetB, MVT::i8 }. When two offsets have
  different types, we can canonicalise them to use the same MVT, as long
  as their runtime sizes are guaranteed to have the same size-ratio as
  they would have at compile-time.

  When we have both scalable- and fixed-size objects on the stack, we can
  create an offset that is:

    ({ offset_fixed, MVT::i8 } + { offset_scalable, MVT::nxv1i8 })

  The struct also contains a getForFrameOffset() method that is specific
  to AArch64 and decomposes the frame-offset to be used directly in
  instructions that operate on the stack or index into the stack.
  (A sketch of the concept follows this entry.)

  Note: This patch adds StackOffset as an AArch64-only concept, but we
  would like to make this a generic concept/struct that is supported by
  all interfaces that take or return stack offsets (currently as 'int').
  Since that would be a bigger change that is currently pending on D32530
  landing, we thought it makes sense to first show/prove the concept in
  the AArch64 target before proposing to roll this out further.

  Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
  Reviewed By: rovka, greened
  Differential Revision: https://reviews.llvm.org/D61435
  llvm-svn: 368024
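  A self-contained sketch of the concept; the member names below are
  illustrative, not the in-tree class:

  ```c++
  #include <cstdint>

  // Fixed-size bytes and scalable units are tracked separately and only
  // combined when the target decomposes the offset for selection.
  struct StackOffsetSketch {
    int64_t Fixed = 0;    // plain bytes, like { offset, MVT::i8 }
    int64_t Scalable = 0; // scalable units, like { offset, MVT::nxv1i8 }

    StackOffsetSketch operator+(const StackOffsetSketch &O) const {
      return {Fixed + O.Fixed, Scalable + O.Scalable};
    }

    // AArch64-flavoured decomposition: the fixed part is addressed with
    // byte offsets, the scalable part with VL-scaled addressing modes.
    void getForFrameOffset(int64_t &Bytes, int64_t &VLScaled) const {
      Bytes = Fixed;
      VLScaled = Scalable;
    }
  };
  ```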
* AArch64: use xzr/wzr for constant 0 in GlobalISel. (Tim Northover, 2019-08-06; 1 file, -0/+25)
  COPYs from xzr and wzr can often be folded away entirely during
  register allocation, unlike a movz.
  llvm-svn: 368003
* [GlobalISel][CallLowering] Rename isArgumentHandler() -> isIncomingArgumentHandler() (Amara Emerson, 2019-08-05; 1 file, -1/+1)
  Previous name and comment incorrectly implied it was just for formal
  arg handlers, which is not true.
  llvm-svn: 367945
* [AArch64][GlobalISel] Inline tiny memcpy et al at -O0. (Amara Emerson, 2019-08-05; 1 file, -2/+5)
  FastISel has done this since the initial arm64 port was upstreamed, so
  it seems there are no issues with doing this at -O0 for very small
  memcpys. Gives a 0.2% geomean code size improvement on CTMark.
  Differential Revision: https://reviews.llvm.org/D65758
  llvm-svn: 367919
* [AArch64] Expand bcmp() for small block lengths (Evandro Menezes, 2019-08-05; 3 files, -0/+20)
  Patch D56593 by @courbet results in calls to `bcmp()` in some cases,
  should the target support it, unless `TTI::MemCmpExpansionOptions()`
  is overridden by the target.

  In a proprietary benchmark we see a performance drop of about 12% on
  PNG compression before this patch, though it passes all tests.

  This patch mirrors X86 for AArch64 and initializes
  `TTI::MemCmpExpansionOptions()` to then expand calls to `bcmp()` when
  appropriate (the sketch below shows the general idea). No tuning of
  the parameters was performed, but, at this point, it's enough to
  recover the performance drop above.

  This problem also exists on ARM. Once a consensus is reached for
  AArch64, we can work to fix ARM as well.

  Authors:
  - Evandro Menezes (@evandro) <e.menezes@samsung.com>
  - Brian Rzycki (@brzycki) <b.rzycki@samsung.com>

  Differential revision: https://reviews.llvm.org/D64805
  llvm-svn: 367898
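  A sketch of what the expansion means in practice, assuming 8-byte loads
  are legal; the options struct is a stand-in for the real TTI knobs:

  ```c++
  #include <cstdint>
  #include <cstring>

  // Stand-in for the tuning knobs the commit mentions.
  struct MemCmpExpansionOptionsSketch {
    unsigned MaxNumLoads = 4;             // above this, keep the libcall
    unsigned LoadSizes[4] = {8, 4, 2, 1}; // try the widest loads first
  };

  // What the expansion conceptually produces for bcmp(a, b, 16) == 0:
  // two pairs of 8-byte loads and a branch-free equality check instead
  // of a libcall. memcpy is the portable way to express unaligned loads.
  bool bytesEqual16(const void *A, const void *B) {
    uint64_t A0, A1, B0, B1;
    std::memcpy(&A0, A, 8);
    std::memcpy(&A1, static_cast<const char *>(A) + 8, 8);
    std::memcpy(&B0, B, 8);
    std::memcpy(&B1, static_cast<const char *>(B) + 8, 8);
    return ((A0 ^ B0) | (A1 ^ B1)) == 0;
  }
  ```

  For small lengths the handful of loads and XORs this lowers to is the
  win over the overhead of a library call.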
* [AArch64] Set preferred function alignment to 16 bytes on Neoverse N1 (Pablo Barrio, 2019-08-05; 1 file, -0/+2)
  Summary: The Arm Neoverse N1 Software Optimization Guide [1], Section
  "4.8 Branch instruction alignment", states:

  "Consider aligning subroutine entry points and branch targets to 32B
  boundaries, within the bounds of the code-density requirements of the
  program."

  This patch sets the preferred function alignment on Neoverse N1 to
  2^4 = 16B. This was already the case in some of the latest Cortex-A
  CPUs. Benchmarking in previous Cortex-A CPUs suggested that 16B
  alignment is already better than the default. See commit d04ee305.

  The reason we don't set it to 32B right now (as the optimisation guide
  suggests) is that this will impact code size and perhaps the
  instruction cache performance. Therefore we need benchmark numbers
  first.

  I have also added testing for A75 and A76 that we were missing.

  [1] https://developer.arm.com/docs/swog309707/latest

  Reviewers: fhahn, greened, samparker, dmgreen
  Reviewed By: dmgreen
  Subscribers: dmgreen, javed.absar, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65654
  llvm-svn: 367894
* [AArch64] Implement initial SVE calling convention support (Cullen Rhodes, 2019-08-05; 3 files, -1/+83)
  Summary: This patch adds initial support for the SVE calling convention
  such that SVE types can be passed as arguments and return values
  to/from a subroutine.

  The SVE AAPCS states [1]:

    z0-z7 are used to pass scalable vector arguments to a subroutine,
    and to return scalable vector results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in such
    registers, it must ensure that the entire contents of z8-z23 are
    preserved across the call. In other cases it need only preserve the
    low 64 bits of z8-z15, as described in §5.1.2.

    p0-p3 are used to pass scalable predicate arguments to a subroutine
    and to return scalable predicate results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in these
    registers, it must ensure that p4-p15 are preserved across the
    call. In other cases it need not preserve any scalable predicate
    register contents.

  SVE predicate and data registers are passed indirectly (i.e. spilled
  to the stack and pass the address) if they exceed the registers used
  for argument passing defined by the PCS referenced above. Until SVE
  stack support is merged we can't spill SVE registers to the stack, so
  currently an llvm_unreachable is used where we will eventually handle
  this. (A source-level example follows this entry.)

  [1] https://static.docs.arm.com/100986/0000/100986_0000.pdf

  Reviewed By: ostannard
  Differential Revision: https://reviews.llvm.org/D65448
  llvm-svn: 367859
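  For illustration, a source-level function that exercises this PCS,
  assuming an SVE-enabled ACLE toolchain (e.g. -march=armv8-a+sve); the
  particular intrinsic used is incidental:

  ```c++
  // Types and intrinsics come from the Arm C Language Extensions (ACLE).
  #include <arm_sve.h>

  // Under the PCS quoted above: PG arrives in p0, X in z0, Y in z1, and
  // the scalable vector result is returned in z0.
  svfloat32_t fused_scale_add(svbool_t PG, svfloat32_t X, svfloat32_t Y) {
    return svmla_f32_m(PG, X, Y, svdup_f32(2.0f)); // X + Y * 2, predicated
  }
  ```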
* [AArch64] Skip isZIPMask check for masks with an odd number of elements. (Florian Hahn, 2019-08-05; 1 file, -0/+2)
  We process 2 elements at a time and expect the number of elements to be
  even. Similar to D60690.
  Reviewers: dmgreen, samparker, t.p.northover
  Reviewed By: dmgreen
  Differential Revision: https://reviews.llvm.org/D65400
  llvm-svn: 367831
* [LLVM][Alignment] Introduce Alignment Type (Guillaume Chatelet, 2019-08-05; 1 file, -6/+6)
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context:
  http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type:
  https://reviews.llvm.org/D64790
  Reviewers: courbet, jfb, jakehehrlich
  Reviewed By: jfb
  Subscribers: wuzish, jholewinski, arsenm, dschuff, nemanjai, jvesely,
  nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin,
  kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD,
  jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck,
  brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, s.egerton,
  llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65514
  llvm-svn: 367828
* Emit diagnostic if an inline asm constraint requires an immediate (Bill Wendling, 2019-08-03; 1 file, -2/+10)
  Summary: An inline asm call can result in an immediate after inlining.
  Therefore, emit a diagnostic here if a constraint requires an immediate
  but one isn't supplied (see the example below).
  Reviewers: joerg, mgorny, efriedma, rsmith
  Reviewed By: joerg
  Subscribers: asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD,
  zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX,
  jocewei, s.egerton, MaskRay, jyknight, dylanmckay, javed.absar,
  fedor.sergeev, jrtc27, Jim, krytarowski, eraman, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60942
  llvm-svn: 367750
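  A hypothetical source-level example of the situation being diagnosed
  (AArch64 inline asm; the "I" constraint demands a compile-time
  immediate):

  ```c++
  int shift_ok(int X) {
    int R;
    // Fine: 3 is a constant, so it satisfies the immediate constraint.
    __asm__("lsl %w0, %w1, %2" : "=r"(R) : "r"(X), "I"(3));
    return R;
  }

  int shift_maybe_bad(int X, int N) {
    int R;
    // If inlining/constant folding does not turn N into an immediate,
    // the compiler should emit a diagnostic rather than assert.
    __asm__("lsl %w0, %w1, %2" : "=r"(R) : "r"(X), "I"(N));
    return R;
  }
  ```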
* Re-commit "[GlobalISel] Add legalization support for non-power-2 loads and ↵Amara Emerson2019-08-021-8/+5
| | | | | | | | | | stores"" This is an old commit that exposed a bug in the GISel importer, which caused non-truncating stores to be selected for truncating store patterns. Now that's been fixed in r367737 this can go back in. llvm-svn: 367739
* [AArch64][GlobalISel] Eliminate redundant G_ZEXT when the source is implicitly zext-loaded. (Amara Emerson, 2019-08-02; 1 file, -0/+17)
  These cases can come up when the extending loads combiner doesn't
  combine a zext(load) to a zextload op, due to some other operation
  being in between, which then gets simplified at a later stage.
  Differential Revision: https://reviews.llvm.org/D65360
  llvm-svn: 367723
* [AArch64][GlobalISel] Support the neg_addsub_shifted_imm32 pattern (Jessica Paquette, 2019-08-02; 2 files, -12/+65)
  Add an equivalent ComplexRendererFns function for SelectNegArithImmed.
  This allows us to select immediate adds of -1 by turning them into
  subtracts of 1. Update select-binop.mir to show that the pattern works.
  Differential Revision: https://reviews.llvm.org/D65460
  llvm-svn: 367700
* GlobalISel: support swiftself attribute (Tim Northover, 2019-08-02; 1 file, -0/+1)
  llvm-svn: 367683
* Finish moving TargetRegisterInfo::isVirtualRegister() and friends to llvm::Register as started by r367614. NFC (Daniel Sanders, 2019-08-01; 9 files, -53/+48)
  llvm-svn: 367633
* [AArch64] Do not allocate unnecessary emergency slot. (Sander de Smalen, 2019-08-01; 1 file, -2/+2)
  Fix an issue where the compiler still allocates an emergency spill slot
  even though it already decided to spill an extra callee-save register
  to use as a scratch register.
  Reviewers: gberry, thegameg, mstorsjo, t.p.northover
  Reviewed By: thegameg
  Differential Revision: https://reviews.llvm.org/D65504
  llvm-svn: 367540
* [GISel] Address review feedback on passing MD_callees to lowerCall. (Mark Lacey, 2019-07-31; 1 file, -2/+2)
  Preserve the nullptr default for KnownCallees that appears in the base
  class.
  llvm-svn: 367477
* [GISel] Pass MD_callees metadata down in call lowering. (Mark Lacey, 2019-07-31; 2 files, -5/+8)
  Summary: This will make it possible to improve IPRA by taking into
  account register usage in indirect calls. NFC yet; this is just laying
  the groundwork to start building up patches to take advantage of the
  information for improved register allocation.
  Reviewers: aditya_nandakumar, volkan, qcolombet, arsenm, rovka,
  aemerson, paquette
  Subscribers: sdardis, wdng, javed.absar, hiraditya, jrtc27, atanasyan,
  llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65488
  llvm-svn: 367476
* AArch64: Add a tagged-globals backend feature. (Peter Collingbourne, 2019-07-31; 7 files, -1/+42)
  This feature instructs the backend to allow locally defined global
  variable addresses to contain a pointer tag in bits 56-63 that will be
  ignored by the hardware (i.e. TBI), but may be used by an
  instrumentation pass such as HWASAN. It works by adding a MOVK
  instruction to the regular ADRP/ADD sequence that sets bits 48-63 to
  the corresponding bits of the global, with the linker bounds check
  disabled on the ADRP instruction to prevent the tag from causing a
  link failure.

  This implementation of the feature omits the MOVK when loading from or
  storing to a global, which is sufficient for TBI. If the same approach
  is extended to MTE, assuming that 0 is not configured as a catch-all
  tag, we will most likely also need the MOVK in this case in order to
  avoid a tag mismatch. (A small sketch of the TBI tag layout follows.)

  Differential Revision: https://reviews.llvm.org/D65364
  llvm-svn: 367475
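  A small sketch of the TBI tag layout this relies on; the helper names
  are ours for illustration, not part of the feature:

  ```c++
  #include <cstdint>

  // Under TBI, bits 56-63 of a 64-bit pointer carry a tag that the
  // AArch64 hardware ignores when the pointer is dereferenced.
  constexpr unsigned kTagShift = 56;
  constexpr uint64_t kTagMask = 0xffull << kTagShift;

  uint64_t setTag(uint64_t Addr, uint8_t Tag) {
    return (Addr & ~kTagMask) | (uint64_t(Tag) << kTagShift);
  }

  uint8_t getTag(uint64_t Addr) { return uint8_t(Addr >> kTagShift); }
  ```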
* SelectionDAG, MI, AArch64: Widen target flags fields/arguments from unsigned char to unsigned. (Peter Collingbourne, 2019-07-31; 6 files, -13/+12)
  This makes the field wider than MachineOperand::SubReg_TargetFlags so
  that we don't end up silently truncating any higher bits. We should
  still catch any bits truncated from the MachineOperand field as a
  consequence of the assertion in MachineOperand::setTargetFlags().
  Differential Revision: https://reviews.llvm.org/D65465
  llvm-svn: 367474
* [AArch64] Add support for Transactional Memory Extension (TME) (Momchil Velikov, 2019-07-31; 4 files, -12/+78)
  Re-commit r366322 after some fixes.

  TME is a future architecture technology, documented in
  https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools
  https://developer.arm.com/docs/ddi0601/a
  More about the future architectures:
  https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/new-technologies-for-the-arm-a-profile-architecture

  This patch adds support for the TME instructions TSTART, TTEST,
  TCOMMIT, and TCANCEL and the target feature/arch extension "tme". It
  also implements TME builtin functions, defined in ACLE Q2 2019
  (https://developer.arm.com/docs/101028/latest). (A usage sketch
  follows this entry.)

  Differential Revision: https://reviews.llvm.org/D64416
  Patch by Javed Absar and Momchil Velikov
  llvm-svn: 367428
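  A hedged usage sketch of those builtins, assuming a toolchain and CPU
  with the "tme" extension (e.g. -march=armv8-a+tme) and ACLE TME
  support in arm_acle.h:

  ```c++
  #include <arm_acle.h>
  #include <cstdint>

  bool tryTransactionalIncrement(long &Counter) {
    uint64_t Status = __tstart(); // TSTART: 0 means we are now transactional
    if (Status == 0) {
      ++Counter;                  // speculative update inside the transaction
      __tcommit();                // TCOMMIT: publish the update atomically
      return true;
    }
    return false;                 // aborted: caller falls back to a lock
  }
  ```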
* [AArch64][SVE2] Load/store instruction fixes (Cullen Rhodes, 2019-07-31; 2 files, -37/+30)
  Summary:
  * Loads and stores in SVE2 are gather/scatter, not contiguous; fixed by
    renaming multiclasses to reflect this and updating comments.
  * Remove aliases from load/store multiclasses that reflect the
    behaviour of the original form.
  * Fix bug in the scatter store implementation: the vector list should
    be used as input, not output.
  Reviewed By: sdesmalen
  Differential Revision: https://reviews.llvm.org/D65392
  llvm-svn: 367398
* [AArch64][SVE2] Minor refactoring and cleanup (Cullen Rhodes, 2019-07-31; 2 files, -26/+26)
  Summary:
  * Clarify comment with SVE2 for predicated shifts and move next to
    other shift instructions.
  * Clarify comments for various instructions.
  * Move FCVTX instruction next to other fp conversions.
  * Move FLOGB next to other fp instructions and fix description.
  * Remove "cons" from non-constructive multiclass for bitwise
    shift-right and accumulate instructions.
  Reviewed By: sdesmalen
  Differential Revision: https://reviews.llvm.org/D65390
  llvm-svn: 367396
* [AArch64][SVE2] Use destination register as source register (Cullen Rhodes, 2019-07-31; 2 files, -86/+207)
  Summary: This patch fixes a bug in the following instructions, which
  should have been implemented as destructive. A destructive instruction
  is an instruction where one of the source registers also acts as the
  destination register. Therefore, the contents of the source register,
  when the instruction begins execution, are replaced by the result of
  the instruction when the instruction completes execution [1]:
  * SRI/SLI
  * EORBT/EORTB
  * TBX
  * Narrowing top instructions
  * FP convert precision instructions

  These changes are non-functional from the assembler/disassembler
  point-of-view but are necessary for correct codegen.

  [1] https://static.docs.arm.com/ddi0584/ae/DDI0584A_e_SVE_supp_armv8A.pdf

  Reviewed By: sdesmalen
  Differential Revision: https://reviews.llvm.org/D65389
  llvm-svn: 367394
* [AArch64][GlobalISel] Implement narrowing of G_SEXT. (Amara Emerson, 2019-07-26; 1 file, -20/+26)
  We need this to narrow a sext to s128.
  Differential Revision: https://reviews.llvm.org/D65357
  llvm-svn: 367164
* [AArch64][GlobalISel] Select @llvm.aarch64.stlxr for 32-bit pointers (Jessica Paquette, 2019-07-26; 1 file, -3/+21)
  Add partial instruction selection for intrinsics like this:

  ```
  declare i32 @llvm.aarch64.stlxr(i64, i32*)
  ```

  (This only handles the case where a G_ZEXT is feeding the intrinsic.)
  Also make sure that the added store instruction actually has the memory
  op from the original G_STORE.
  Update select-stlxr-intrin.mir and arm64-ldxr-stxr.ll.
  Differential Revision: https://reviews.llvm.org/D65355
  llvm-svn: 367163
* [AArch64][SVE2] Rename bitperm feature to sve2-bitperm (Cullen Rhodes, 2019-07-26; 3 files, -3/+3)
  Summary: The bitperm feature flag is now prefixed with SVE2, as it is
  for all other SVE2 extensions.
  Patch by Maciej Gabka.
  Reviewers: sdesmalen, rovka, chill, SjoerdMeijer, rengolin
  Reviewed By: SjoerdMeijer, rengolin
  Differential Revision: https://reviews.llvm.org/D65327
  llvm-svn: 367124
* [AArch64] Define ETE and TRBE system registers (Momchil Velikov, 2019-07-26; 4 files, -1/+45)
  Embedded Trace Extension and Trace Buffer Extension are optional future
  architecture extensions (cf.
  https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools).
  Their system registers are documented here:
  https://developer.arm.com/docs/ddi0601/a

  ETE shares register names with ETM. One exception is the ETE
  TRCEXTINSELR0 register, which has the same encoding as the ETM
  TRCEXTINSELR register (but different semantics). This patch treats them
  as aliases: the assembler will accept both names, emitting identical
  encoding, and the disassembler will keep disassembling to TRCEXTINSELR.

  Differential Revision: https://reviews.llvm.org/D63707
  llvm-svn: 367093
* [AArch64][GlobalISel] Simplify zext/sext selection, use MachineIRBuilder. NFC. (Amara Emerson, 2019-07-26; 1 file, -32/+28)
  llvm-svn: 367075
* [AArch64][GlobalISel] Fix G_SELECT legalization fallback after r366943. (Amara Emerson, 2019-07-25; 1 file, -1/+1)
  Changes the order of legalization of G_ICMP suggested by Petar in
  D65079.
  llvm-svn: 367060