path: root/llvm/lib/Target/AArch64
Commit message (Author, Date, Files changed, Lines removed/added)
...
* [aarch64] Add combine patterns for fp16 fmla (Sebastian Pop, 2019-09-07, 1 file, -62/+280)
  This patch enables generation of fused multiply add/sub for instructions operating on fp16. Tested on aarch64-linux.
  Differential Revision: https://reviews.llvm.org/D67297
  llvm-svn: 371321
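  As an illustration, here is a minimal IR sketch (hypothetical function name, not from the patch) of the fmul+fadd shape such a combine can fuse into a single fp16 fused multiply-add when contraction is allowed:
      ; A contraction-permitting multiply feeding an add on half;
      ; with +fullfp16 this pair can be selected as one fmla.
      define half @fmla_h(half %a, half %b, half %c) {
        %mul = fmul contract half %a, %b
        %add = fadd contract half %mul, %c
        ret half %add
      }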
* [AArch64][GlobalISel] Enable the localizer for optimized builds. (Amara Emerson, 2019-09-06, 1 file, -3/+1)
  Despite the fact that the localizer's original motivation was to fix horrendous constant spilling at -O0, shortening live ranges still has net benefits even with optimizations enabled. On an -Os build of CTMark, doing this improves code size by 0.5% geomean. There are a few regressions, with bullet increasing in size by 0.5%.
  One example from bullet where code size increased slightly was due to GlobalISel now generating the same code as SelectionDAG. So we actually have an opportunity in future to implement better heuristics for localization and therefore be *better* than SDAG in some cases. In relation to other optimizations, though, that one is relatively minor.
  Differential Revision: https://reviews.llvm.org/D67303
  llvm-svn: 371266
* [AArch64][GlobalISel] Always fall back on tail calls with -tailcallopt (Jessica Paquette, 2019-09-06, 1 file, -0/+6)
  -tailcallopt requires that we perform different stack adjustments than with sibling calls. For example, the `@caller_to0_from8` function in test/CodeGen/AArch64/tail-call.ll requires that we adjust SP. Without -tailcallopt, this adjustment does not happen. With it, however, it is expected.
  So, to ensure that adding sibling call support doesn't break -tailcallopt, make CallLowering always fall back on possible tail calls when -tailcallopt is passed in.
  Update test/CodeGen/AArch64/tail-call.ll with a GlobalISel line to make sure that we don't differ from the SDAG implementation at any point.
  Differential Revision: https://reviews.llvm.org/D67245
  llvm-svn: 371227
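  For context, a sketch (not the actual test) of a guaranteed tail call that needs an SP adjustment: under AAPCS64 the ninth integer argument is passed on the stack, so with -tailcallopt the caller must pop its incoming stack argument before branching to a callee that takes none:
      declare fastcc void @callee_to0()

      ; %s is the 9th integer argument and lives on the caller's stack;
      ; -tailcallopt requires SP to be adjusted before the tail branch.
      define fastcc void @caller(i64 %a, i64 %b, i64 %c, i64 %d,
                                 i64 %e, i64 %f, i64 %g, i64 %h, i64 %s) {
        tail call fastcc void @callee_to0()
        ret void
      }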
* [Alignment][NFC] Use Align with TargetLowering::setPrefFunctionAlignment (Guillaume Chatelet, 2019-09-06, 1 file, -1/+2)
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: nemanjai, javed.absar, hiraditya, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, s.egerton, pzheng, ychen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67267
  llvm-svn: 371212
* [Alignment][NFC] Use Align with TargetLowering::setPrefLoopAlignment (Guillaume Chatelet, 2019-09-06, 1 file, -1/+1)
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: nemanjai, hiraditya, kbarton, MaskRay, jsji, ychen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67278
  llvm-svn: 371210
* [Alignment][NFC] Use Align with TargetLowering::setMinFunctionAlignment (Guillaume Chatelet, 2019-09-06, 1 file, -1/+1)
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: jyknight, sdardis, nemanjai, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, s.egerton, pzheng, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67229
  llvm-svn: 371200
* Recommit "[AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls" (Jessica Paquette, 2019-09-05, 2 files, -7/+190)
  Recommit basic sibling call lowering (https://reviews.llvm.org/D67189).
  The issue was that if you have a return type other than void, call lowering will emit COPYs to get the return value after the call. Disallow sibling calls other than ones that return void for now. Also proactively disable swifterror tail calls for now, since there's a similar issue with COPYs there.
  Update call-translator-tail-call.ll to include test cases for each of these things.
  llvm-svn: 371114
* Revert rL370996 from llvm/trunk: [AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls (Simon Pilgrim, 2019-09-05, 2 files, -173/+7)
  This adds support for basic sibling call lowering in AArch64. The intent here is to only handle tail calls which do not change the ABI (hence, sibling calls.)
  At this point, it is very restricted. It does not handle
  - Vararg calls.
  - Calls with outgoing arguments.
  - Calls whose calling conventions differ from the caller's calling convention.
  - Tail/sibling calls with BTI enabled.
  This patch adds
  - `AArch64CallLowering::isEligibleForTailCallOptimization`, which is equivalent to the same function in AArch64ISelLowering.cpp (albeit with the restrictions above.)
  - `mayTailCallThisCC` and `canGuaranteeTCO`, which are identical to those in AArch64ISelLowering.cpp.
  - `getCallOpcode`, which is exactly what it sounds like.
  Tail/sibling calls are lowered by checking if they pass target-independent tail call positioning checks, and checking if they satisfy `isEligibleForTailCallOptimization`. If they do, then a tail call instruction is emitted instead of a normal call. If we have a sibling call (which is always the case in this patch), then we do not emit any stack adjustment operations. When we go to lower a return, we check if we've already emitted a tail call. If so, then we skip the return lowering.
  For testing, this patch
  - Adds call-translator-tail-call.ll to test which tail calls we currently lower, which ones we don't, and which ones we shouldn't.
  - Updates branch-target-enforcement-indirect-calls.ll to show that we fall back as expected.
  Differential Revision: https://reviews.llvm.org/D67189
  ........
  This fails on EXPENSIVE_CHECKS builds due to a -verify-machineinstrs test failure in CodeGen/AArch64/dllimport.ll
  llvm-svn: 371051
* [LLVM][Alignment] Make functions using log of alignment explicit (Guillaume Chatelet, 2019-09-05, 3 files, -23/+25)
  Summary: This patch renames functions that take or return alignment as log2, which will help with the transition to llvm::Align. The renaming makes it explicit that we deal with log(alignment) instead of a power-of-two alignment.
  A few renames uncovered dubious assignments:
  - `MirParser`/`MirPrinter` was expecting powers of two, but `MachineFunction` and `MachineBasicBlock` were using log2(align). This patch fixes it and updates the documentation.
  - `MachineBlockPlacement` exposes two flags (`align-all-blocks` and `align-all-nofallthru-blocks`) supposedly interpreted as power-of-two alignments; internally these values are interpreted as log2(align). This patch updates the documentation.
  - `MachineFunction` exposes `align-all-functions`, also interpreted as a power-of-two alignment; internally this value is interpreted as log2(align). This patch updates the documentation.
  Reviewers: lattner, thegameg, courbet
  Subscribers: dschuff, arsenm, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, Jim, s.egerton, llvm-commits, courbet
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65945
  llvm-svn: 371045
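  As a reminder of the two conventions being disentangled (a sketch, not part of the patch): IR spells out the power-of-two byte value itself, while the renamed functions traffic in its log2:
      ; IR states the alignment directly: 16 bytes, i.e. log2(align) = 4.
      @g = global i32 0, align 16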
* [AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls (Jessica Paquette, 2019-09-04, 2 files, -7/+173)
  This adds support for basic sibling call lowering in AArch64. The intent here is to only handle tail calls which do not change the ABI (hence, sibling calls.)
  At this point, it is very restricted. It does not handle
  - Vararg calls.
  - Calls with outgoing arguments.
  - Calls whose calling conventions differ from the caller's calling convention.
  - Tail/sibling calls with BTI enabled.
  This patch adds
  - `AArch64CallLowering::isEligibleForTailCallOptimization`, which is equivalent to the same function in AArch64ISelLowering.cpp (albeit with the restrictions above.)
  - `mayTailCallThisCC` and `canGuaranteeTCO`, which are identical to those in AArch64ISelLowering.cpp.
  - `getCallOpcode`, which is exactly what it sounds like.
  Tail/sibling calls are lowered by checking if they pass target-independent tail call positioning checks, and checking if they satisfy `isEligibleForTailCallOptimization`. If they do, then a tail call instruction is emitted instead of a normal call. If we have a sibling call (which is always the case in this patch), then we do not emit any stack adjustment operations. When we go to lower a return, we check if we've already emitted a tail call. If so, then we skip the return lowering.
  For testing, this patch
  - Adds call-translator-tail-call.ll to test which tail calls we currently lower, which ones we don't, and which ones we shouldn't.
  - Updates branch-target-enforcement-indirect-calls.ll to show that we fall back as expected.
  Differential Revision: https://reviews.llvm.org/D67189
  llvm-svn: 370996
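  A minimal sketch (hypothetical names) of a call that satisfies the restrictions above; with no outgoing arguments and matching calling conventions, the tail call can become a single branch with no stack adjustment:
      declare void @callee()

      ; Same convention, no stack arguments: can lower to `b callee`
      ; instead of `bl callee` followed by `ret`.
      define void @caller() {
        tail call void @callee()
        ret void
      }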
* [AArch64][GlobalISel] Legalize 128 bit divisions to libcalls. (Amara Emerson, 2019-09-03, 1 file, -0/+1)
  Now that we have the infrastructure to support s128 types as parameters we can expand these to libcalls.
  Differential Revision: https://reviews.llvm.org/D66185
  llvm-svn: 370823
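  For example (a sketch, not one of the patch's tests), a signed 128-bit division has no wide-enough AArch64 divide instruction, so legalization emits a call to the libgcc/compiler-rt helper __divti3:
      ; s128 has no native divide; this becomes a __divti3 libcall,
      ; with each i128 passed as a pair of 64-bit GPRs.
      define i128 @sdiv128(i128 %a, i128 %b) {
        %q = sdiv i128 %a, %b
        ret i128 %q
      }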
* [GlobalISel][CallLowering] Add support for splitting types according to calling conventions. (Amara Emerson, 2019-09-03, 1 file, -7/+8)
  On AArch64, s128 types have to be split into s64 GPRs when passed as arguments. This change adds the generic support in call lowering for dealing with multiple registers, for incoming and outgoing args. Support for splitting return types is not yet implemented.
  Differential Revision: https://reviews.llvm.org/D66180
  llvm-svn: 370822
* [AArch64][GlobalISel] Don't import i64imm_32bit pattern at -O0 (Jessica Paquette, 2019-09-03, 1 file, -0/+11)
  This pattern, when imported at -O0, adds an extra copy via the SUBREG_TO_REG. This is because the SUBREG_TO_REG is not eliminated. At all other opt levels, it is eliminated.
  This is a 1% geomean code size savings at -O0 on CTMark.
  Differential Revision: https://reviews.llvm.org/D67027
  llvm-svn: 370789
* [SVE][Inline-Asm] Fix -Wimplicit-fallthrough in AArch64ISelLowering.cpp (Kerry McLaughlin, 2019-09-03, 1 file, -0/+1)
  Summary: Adds break to 'x' case in getRegForInlineAsmConstraint added by D66302, fixing the unintentional fallthrough.
  Reviewers: sdesmalen, rovka, cameron.mcinally, greened, gribozavr, ruiu
  Reviewed By: sdesmalen
  Subscribers: bjope, javed.absar, tschuett, kristof.beyls, rkruppe, psnobl, llvm-commits, cfe-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67095
  llvm-svn: 370769
* [SVE][Inline-Asm] Support for SVE asm operands (Kerry McLaughlin, 2019-09-02, 4 files, -7/+89)
  Summary: Adds the following inline asm constraints for SVE:
  - w: SVE vector register with full range, Z0 to Z31
  - x: Restricted to registers Z0 to Z15 inclusive.
  - y: Restricted to registers Z0 to Z7 inclusive.
  This change also adds the "z" modifier to interpret a register as an SVE register. Not all of the bitconvert patterns added by this patch are used, but they have been included here for completeness.
  Reviewers: t.p.northover, sdesmalen, rovka, momchil.velikov, rengolin, cameron.mcinally, greened
  Reviewed By: sdesmalen
  Subscribers: javed.absar, tschuett, rkruppe, psnobl, cfe-commits, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66302
  llvm-svn: 370673
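  A sketch of how these constraints might be used from IR (hypothetical asm string; `${N:z}` applies the new "z" print modifier to show the register in its z-form):
      ; "=w" allows any of Z0-Z31 for the output; "y" restricts the
      ; input to Z0-Z7.
      define <vscale x 16 x i8> @sve_mov(<vscale x 16 x i8> %in) {
        %out = call <vscale x 16 x i8> asm "mov ${0:z}.d, ${1:z}.d",
                    "=w,y"(<vscale x 16 x i8> %in)
        ret <vscale x 16 x i8> %out
      }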
* MemTag: unchecked load/store optimization. (Evgeniy Stepanov, 2019-08-30, 6 files, -1/+247)
  Summary: MTE allows a memory access to bypass the tag check iff the address argument is [SP, #imm]. This change takes advantage of this to demote uses of tagged addresses to regular FrameIndex operands, reducing register pressure in large functions.
  The MO_TAGGED target flag is used to signal that the FrameIndex operand refers to memory that might be tagged, and needs to be handled with care. Such an operand must be lowered to [SP, #imm] directly, without a scratch register.
  The transformation pass attempts to predict when the offset will be out of range and disable the optimization. AArch64RegisterInfo::eliminateFrameIndex has an escape hatch in case this prediction has been wrong, but it is quite inefficient and should be avoided.
  Reviewers: pcc, vitalybuka, ostannard
  Subscribers: mgorny, javed.absar, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66457
  llvm-svn: 370490
* [AArch64][GlobalISel] Select arithmetic extended register patterns (Jessica Paquette, 2019-08-29, 3 files, -36/+224)
  This teaches GISel to select patterns which fold an extend plus optional shift into the addressing mode. In particular, adds and subs.
  Factor out the arith extended register ComplexPatterns in AArch64InstrFormats.td and create GISel equivalents. Add some equivalent functions to the ones in AArch64ISelDAGToDAG:
  - `selectArithExtendedRegister`
  - `narrowExtendRegIfNeeded`
  - `getExtendTypeForInst`
  `getExtendTypeForInst` includes the checks for loads and stores. This will be used for WRO addressing modes in loads + stores.
  Teach selectCopy to properly handle subregister copies on the same bank in order to support `narrowExtendRegIfNeeded`. The extended register must be a GPR32, so we need to support same-bank subregister copies.
  Fix a bug in getSubRegForClass which would cause registers on things like GPR32common to end up getting ssub. Just change the check to look for FPR32 rather than GPR32.
  For tests:
  - Add select-arith-extended-reg.mir
  - Update addsub_ext.ll to include GlobalISel checks
  Differential Revision: https://reviews.llvm.org/D66835
  llvm-svn: 370410
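  A sketch (hypothetical function name) of the extend-plus-shift shape that now folds into the operand:
      ; Selects to: add x0, x0, w1, sxtw #2
      define i64 @add_sxtw(i64 %a, i32 %b) {
        %ext = sext i32 %b to i64
        %shl = shl i64 %ext, 2
        %add = add i64 %a, %shl
        ret i64 %add
      }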
* GlobalISel: Add known bits to InstructionSelector (Matt Arsenault, 2019-08-29, 1 file, -2/+3)
  AMDGPU uses this for some addressing mode selection patterns. The analysis run itself doesn't do anything, so it seems easier to just always require this than adding a way to opt in.
  llvm-svn: 370388
* [GlobalISel][AArch64] Select llvm.aarch64.stxr* intrinsics. (Jessica Paquette, 2019-08-29, 1 file, -4/+12)
  Add a GISelPredicateCode to the stxr_* PatFrags in AArch64InstrAtomics.td. This allows us to select these intrinsics.
  Differential Revision: https://reviews.llvm.org/D65779
  llvm-svn: 370382
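  For reference, a sketch of IR exercising one of these intrinsics (the p0i64 suffix reflects the pointee-typed intrinsic mangling in use at the time):
      declare i32 @llvm.aarch64.stxr.p0i64(i64, i64*)

      ; Store-exclusive: returns 0 on success, 1 if the exclusive
      ; monitor was lost; selects to a stxr instruction.
      define i32 @store_exclusive(i64 %val, i64* %addr) {
        %status = call i32 @llvm.aarch64.stxr.p0i64(i64 %val, i64* %addr)
        ret i32 %status
      }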
* [GlobalISel][AArch64] Use a GISelPredicateCode to select llvm.aarch64.stlxr.* (Jessica Paquette, 2019-08-29, 2 files, -64/+12)
  Remove manual selection code for this intrinsic and use a GISelPredicateCode instead. This allows us to fully select this intrinsic without any tricky custom C++ matching.
  Differential Revision: https://reviews.llvm.org/D65780
  llvm-svn: 370380
* [AArch64][GlobalISel] Select @llvm.aarch64.ldxr.* intrinsics (Jessica Paquette, 2019-08-29, 1 file, -4/+12)
  Same thing as D66897, but for ldxr.* instead. Add a GISelPredicateCode to the ldxr_* definitions, which allows us to import them.
  Add select-ldxr-intrin.mir, and update arm64-ldxr-stxr.ll.
  Differential Revision: https://reviews.llvm.org/D66898
  llvm-svn: 370378
* [AArch64][GlobalISel] Select @llvm.aarch64.ldaxr.* intrinsics (Jessica Paquette, 2019-08-29, 2 files, -4/+24)
  Add a GISelPredicateCode to ldaxr_*. This allows us to import the patterns for @llvm.aarch64.ldaxr.*, and thus select them.
  Add `isLoadStoreOfNumBytes` for the GISelPredicateCode, since each of these intrinsics involves the same check.
  Add select-ldaxr-intrin.mir, and update arm64-ldxr-stxr.ll.
  Differential Revision: https://reviews.llvm.org/D66897
  llvm-svn: 370377
* [RISCV] Avoid generating AssertZext for LP64 ABI when lowering floating LibCall (Shiva Chen, 2019-08-28, 1 file, -3/+3)
  The patch fixes an issue where RV64 didn't clear the upper bits when returning a complex floating-point value with the LP64 ABI.
      float _Complex complex_add(float _Complex a, float _Complex b)
      {
        return a + b;
      }

      RealResult = zero_extend(RealA + RealB)
      ImageResult = ImageA + ImageB
      Return (RealResult | (ImageResult << 32))
  The patch introduces a shouldExtendTypeInLibCall target hook to suppress the AssertZext generation when lowering floating LibCalls.
  Thanks to Eli's comments from the Bugzilla: https://bugs.llvm.org/show_bug.cgi?id=42820
  Differential Revision: https://reviews.llvm.org/D65497
  llvm-svn: 370275
* [AArch64][GlobalISel] Fall back when translating musttail calls (Jessica Paquette, 2019-08-28, 1 file, -0/+5)
  These are currently translated as normal function calls in AArch64. Until we have proper tail call lowering, we shouldn't translate these.
  Differential Revision: https://reviews.llvm.org/D66842
  llvm-svn: 370225
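  A sketch of the IR shape in question; `musttail` is a hard guarantee rather than a hint, so translating it as an ordinary call would be a miscompile, and falling back is the safe choice:
      declare i32 @callee(i32)

      ; musttail demands a real tail call; caller and callee
      ; prototypes must match and the call must precede the ret.
      define i32 @caller(i32 %x) {
        %r = musttail call i32 @callee(i32 %x)
        ret i32 %r
      }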
* [SelectionDAG] Don't generate libcalls for wide shifts on Windows (PR42711) (Hans Wennborg, 2019-08-28, 2 files, -5/+9)
  Neither libgcc nor compiler-rt is usually used on Windows, so these functions can't be called.
  Differential revision: https://reviews.llvm.org/D66880
  llvm-svn: 370204
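  For example (a sketch), a variable 128-bit shift, which would otherwise be lowered to a call to the libgcc/compiler-rt helper __ashlti3:
      ; On Windows targets this is now expanded to an inline shift
      ; sequence instead of a call to __ashlti3.
      define i128 @shl128(i128 %a, i128 %amt) {
        %r = shl i128 %a, %amt
        ret i128 %r
      }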
* [GlobalISel] Replace hard coded dynamic alloca handling with G_DYN_STACKALLOC. (Amara Emerson, 2019-08-27, 1 file, -0/+2)
  This change moves the actual stack pointer manipulation into the legalizer, available to targets via lower(). The codegen is slightly different because we're using explicit masks instead of G_PTRMASK, and using G_SUB rather than adding a negative amount via G_GEP.
  Differential Revision: https://reviews.llvm.org/D66678
  llvm-svn: 370104
* AArch64: avoid creating cycle in DAG for post-increment NEON ops. (Tim Northover, 2019-08-27, 1 file, -1/+1)
  Inserting a value into Visited has the effect of terminating a search for predecessors if that node is seen. This is legitimate for the base address, and acts as a slight performance optimization, but the vector-building node can be part of a legitimate cycle so we shouldn't stop searching there.
  PR43056.
  llvm-svn: 370036
* Use a bit of relaxed constexpr to make FeatureBitset constant initializable (Benjamin Kramer, 2019-08-24, 1 file, -6/+7)
  This requires std::initializer_list to be a literal type, which it is starting with C++14. The downside is that std::bitset is still not constexpr-friendly, so this change contains a re-implementation of most of it.
  Shrinks clang by ~60k.
  llvm-svn: 369847
* [AArch64][GlobalISel] Import XRO load/store patterns instead of custom selection (Jessica Paquette, 2019-08-23, 2 files, -66/+42)
  Instead of using custom C++ in `earlySelect` for loads and stores, just import the patterns. Remove `earlySelectLoad`, since we can just import the work it's doing.
  Some minor changes to how `ComplexRendererFns` are returned for the XRO addressing modes. If you add immediates in two steps, sometimes they are not imported properly and you only end up with one immediate. I'm not sure if this is intentional.
  - Update load-addressing-modes.mir to include the instructions we can now import.
  - Add a similar test, store-addressing-modes.mir, to show which store opcodes we currently import, and show that we can pull in shifts etc.
  - Update arm64-fastisel-gep-promote-before-add.ll to use FastISel instead of GISel. This test failed with GISel because GISel folds the gep into the load. The test checks that FastISel doesn't fold non-pointer-width adds into loads. GISel, on the other hand, produces a G_CONSTANT of -128 for the add, and then a G_GEP, which must be pointer-width.
  Note that we don't get STRBRoX right now. It seems like the importer can't handle `FPR8Op:{ *:[Untyped] }:$Rt` source operands. So, those are not currently supported.
  Differential Revision: https://reviews.llvm.org/D66679
  llvm-svn: 369806
* Do a sweep of symbol internalization. NFC. (Benjamin Kramer, 2019-08-23, 1 file, -3/+3)
  llvm-svn: 369803
* Use VT::getHalfNumVectorElementsVT helpers in a few places. NFCI. (Simon Pilgrim, 2019-08-23, 1 file, -5/+4)
  llvm-svn: 369751
* [MC] Minor cleanup to MCFixup::Kind handling. NFC. (Sam Clegg, 2019-08-23, 3 files, -6/+5)
  Prefer `MCFixupKind` where possible and add getTargetKind() to convert to `unsigned` when needed, rather than scattering cast operators around the place.
  Differential Revision: https://reviews.llvm.org/D59890
  llvm-svn: 369720
* [MachO][TLOF] Use hasLocalLinkage to determine if indirect symbol is local (Francis Visoiu Mistrih, 2019-08-22, 2 files, -3/+4)
  Local symbols in the indirect symbol table contain the value `INDIRECT_SYMBOL_LOCAL`, and the corresponding __pointers entry must contain the address of the target.
  In r349060, I added support for local symbols in the indirect symbol table, which was checking if the symbol `isDefined` && `!isExternal` to determine if the symbol is local or not. It turns out that `isDefined` will return false if the user of the symbol comes before its definition, and we'll again generate .long 0, which will be the symbol at the address 0x0.
  Instead of doing that, use GlobalValue::hasLocalLinkage() to check if the symbol is local.
  Differential Revision: https://reviews.llvm.org/D66563
  llvm-svn: 369671
* [TargetLowering] Remove optional arguments passing to makeLibCall (Shiva Chen, 2019-08-22, 1 file, -3/+6)
  The patch introduces a MakeLibCallOptions struct, as suggested by @efriedma on D65497. The struct contains argument flags which will be passed to the makeLibCall function. The patch should not have any functionality changes.
  Differential Revision: https://reviews.llvm.org/D65795
  llvm-svn: 369622
* [AArch64] Update MTE system register encodings (Luke Cheeseman, 2019-08-21, 1 file, -5/+5)
  The encodings for the system registers TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3 and TFSR_EL12 have been changed so that they consistently have CRn=5 and CRm=6, as per https://developer.arm.com/docs/ddi0487/latest.
  Differential Revision: https://reviews.llvm.org/D65442
  llvm-svn: 369505
* [AArch64][GlobalISel] Add support for narrowScalar of G_ZEXT (Amara Emerson, 2019-08-21, 1 file, -3/+2)
  We do this by merging the source with the high bits set to 0.
  Differential Revision: https://reviews.llvm.org/D66181
  llvm-svn: 369480
* [AArch64][GlobalISel] Select logical_imm32 and logical_imm64 patterns (Jessica Paquette, 2019-08-20, 2 files, -0/+23)
  Add a GlobalISel equivalent for the logical_imm32_XFORM and logical_imm64_XFORM SDNodeXForms in AArch64InstrFormats.td.
  - Add select-logical-imm.mir, which contains tests for each imported pattern.
  - Update select-pr32733.mir and select-scalar-shift-imm.mir, since they now select instructions of this form.
  Differential Revision: https://reviews.llvm.org/D66162
  llvm-svn: 369465
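  A sketch of the shape these patterns cover; 0xff is a contiguous run of ones, so it encodes directly as an AArch64 logical (bitmask) immediate:
      ; Selects to: and x0, x0, #0xff (no constant materialization).
      define i64 @and_bitmask(i64 %a) {
        %r = and i64 %a, 255
        ret i64 %r
      }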
* [AArch64][GlobalISel] Select patterns which use shifted register operands (Jessica Paquette, 2019-08-20, 2 files, -0/+89)
  This adds GlobalISel equivalents for the following from AArch64InstrFormats:
  - arith_shifted_reg32
  - arith_shifted_reg64
  And partial support for
  - logical_shifted_reg32
  - logical_shifted_reg64
  The only thing missing for the logical cases is support for rotates. Other than the missing support, the transformation is identical for the arithmetic shifted register and the logical shifted register.
  Lots of tests here:
  - Add select-arith-shifted-reg.mir to show that we correctly select add and sub instructions which use this pattern.
  - Add select-logical-shifted-reg.mir to cover patterns which are not shared between the arithmetic and logical cases.
  - Update addsub-shifted.ll to show that we correctly fold shifts into adds/subs.
  - Update eon.ll to show that we can select the eon instruction by folding xors.
  Differential Revision: https://reviews.llvm.org/D66163
  llvm-svn: 369460
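  A sketch (hypothetical function name) of the fold this enables for shifted register operands:
      ; The shift folds into the second operand:
      ; add x0, x0, x1, lsl #3
      define i64 @add_shifted(i64 %a, i64 %b) {
        %shl = shl i64 %b, 3
        %add = add i64 %a, %shl
        ret i64 %add
      }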
* Refactor isPointerOffset (NFC). (Evgeniy Stepanov, 2019-08-19, 1 file, -7/+7)
  Summary: Simplify the API using Optional<> and address comments in https://reviews.llvm.org/D66165
  Reviewers: vitalybuka
  Subscribers: hiraditya, llvm-commits, ostannard, pcc
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66317
  llvm-svn: 369300
* MemTag: stack initializer merging. (Evgeniy Stepanov, 2019-08-19, 3 files, -6/+301)
  Summary: MTE provides instructions to update memory tags and data at the same time. This change makes use of those to generate more compact code for stack variable tagging + initialization.
  We collect memory store and memset instructions following an alloca or a lifetime.start call, and replace them with the corresponding MTE intrinsics. Since the intrinsics work on 16-byte aligned chunks, the stored values are combined as necessary.
  Reviewers: pcc, vitalybuka, ostannard
  Subscribers: srhines, javed.absar, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66167
  llvm-svn: 369297
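  Conceptually (a sketch; assumes the llvm.aarch64.stgp intrinsic associated with this work), one 16-byte granule can be tagged and initialized by a single stgp store instead of separate tag and data writes:
      declare void @llvm.aarch64.stgp(i8*, i64, i64)

      ; Writes the address tag and 16 bytes of data in one stgp.
      define void @init_granule(i8* %tagged_addr) {
        call void @llvm.aarch64.stgp(i8* %tagged_addr, i64 42, i64 0)
        ret void
      }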
* [nfc] Silence gcc warning (Serge Guelton, 2019-08-19, 1 file, -3/+2)
  llvm-svn: 369266
* Revert Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions. (Paul Walker, 2019-08-17, 1 file, -6/+4)
  This reverts r369132 (git commit 19301d75f086caae1a495d267f5d0264b225942d)
  llvm-svn: 369186
* Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions. (Paul Walker, 2019-08-17, 1 file, -4/+6)
  This reverts r369133 (git commit 2632c677f85cba1ac2aef5d68aaf8af0f5b3c944)
  llvm-svn: 369185
* [AArch64][GlobalISel] Fix an assertion during G_UNMERGE selection for s128 types. (Amara Emerson, 2019-08-16, 1 file, -1/+3)
  llvm-svn: 369172
* [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions. (Paul Walker, 2019-08-16, 1 file, -6/+4)
  Recommit with fixes for mac builders.
  Summary: AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta instructions (e.g. CFI_INSTRUCTION) as normal instructions and giving them a size of 4. This results in branch relaxation calculating block sizes wrong. Branch relaxation also considers alignment, and thus a single mistake can result in later blocks being incorrectly sized even when they themselves do not contain meta instructions.
  The net result is we might not relax a branch whose destination is not within range.
  Reviewers: nickdesaulniers, peter.smith
  Reviewed By: peter.smith
  Subscribers: javed.absar, kristof.beyls, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66337
  > llvm-svn: 369111
  llvm-svn: 369133
* Revert [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions. (Paul Walker, 2019-08-16, 1 file, -4/+6)
  This reverts r369111 (git commit 3ccee5f7c4087ed119dbeba537f3df1b048a4dff)
  llvm-svn: 369132
* Relanding r368987 [AArch64] Change location of frame-record within callee-save area. (Sander de Smalen, 2019-08-16, 3 files, -26/+88)
  Changes: There was a condition for `!NeedsFrameRecord` missing in the assert. The assert in question has changed to:
      + assert((!RPI.isPaired() || !NeedsFrameRecord || RPI.Reg2 != AArch64::FP ||
      +         RPI.Reg1 == AArch64::LR) &&
      +        "FrameRecord must be allocated together with LR");
  This addresses PR43016.
  llvm-svn: 369122
* [AArch64InstrInfo] Stop getInstSizeInBytes returning non-zero for meta instructions. (Paul Walker, 2019-08-16, 1 file, -6/+4)
  Summary: AArch64InstrInfo::getInstSizeInBytes is incorrectly treating meta instructions (e.g. CFI_INSTRUCTION) as normal instructions and giving them a size of 4. This results in branch relaxation calculating block sizes wrong. Branch relaxation also considers alignment, and thus a single mistake can result in later blocks being incorrectly sized even when they themselves do not contain meta instructions.
  The net result is we might not relax a branch whose destination is not within range.
  Reviewers: nickdesaulniers, peter.smith
  Reviewed By: peter.smith
  Subscribers: javed.absar, kristof.beyls, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66337
  llvm-svn: 369111
* Revert r368987, it caused PR43016. (Nico Weber, 2019-08-16, 3 files, -88/+26)
  llvm-svn: 369080
* [SDAG] Minor code cleanup/standardization of atomic accessors [NFC] (Philip Reames, 2019-08-15, 1 file, -2/+2)
  llvm-svn: 369057