path: root/llvm/test/CodeGen/AArch64/GlobalISel
Commit message | Author | Age | Files | Lines
...
* [GlobalISel] CSEMIRBuilder: Add support for G_GEP (Volkan Keles, 2019-08-15, 1 file, -0/+51)
  Summary: This patch adds G_GEP to `shouldCSEOpc` so that it can be CSEd. It also
  refactors `translateGetElementPtr` by replacing `createGenericVirtualRegister`
  calls with types.

  Reviewers: aditya_nandakumar, arsenm, dsanders, paquette, aemerson
  Reviewed By: aditya_nandakumar
  Subscribers: wdng, rovka, javed.absar, hiraditya, Petar.Avramovic, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66316
  llvm-svn: 369070

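  A rough generic-MIR sketch of what CSE-ing G_GEP buys (illustrative only; the
  registers, types, and constants are invented rather than taken from the patch):

  ```
  %0:_(p0) = COPY $x0
  %1:_(s64) = G_CONSTANT i64 16
  ; Without CSE, asking the builder for the same pointer twice emits two G_GEPs:
  %2:_(p0) = G_GEP %0, %1(s64)
  %3:_(p0) = G_GEP %0, %1(s64)
  ; With G_GEP in shouldCSEOpc, the CSEMIRBuilder hands back %2 for the second
  ; request, so the duplicate %3 is never built.
  ```
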
* [AArch64][GlobalISel] Custom selection for s8 load acquire. (Amara Emerson, 2019-08-14, 1 file, -0/+37)
  Implement this single atomic load instruction so that we can compile stack
  protector code.

  Differential Revision: https://reviews.llvm.org/D66245
  llvm-svn: 368923

* [AArch64][GlobalISel] RBS: Treat s128s like vectors when unmerging. (Amara Emerson, 2019-08-13, 1 file, -0/+24)
  The destinations should be FPRs (for now).

  Differential Revision: https://reviews.llvm.org/D66184
  llvm-svn: 368775

* Relax opcode checks in test to check for only a number instead of a specific number. (Douglas Yung, 2019-08-13, 1 file, -5/+5)
  llvm-svn: 368756

* GlobalISel: Change representation of shuffle masks (Matt Arsenault, 2019-08-13, 6 files, -93/+46)
  Currently shufflemasks get emitted as any other constant, and you end up with a
  bunch of virtual registers of G_CONSTANT with a G_BUILD_VECTOR. The AArch64
  selector then asserts on anything that doesn't fit this pattern. This isn't an
  ideal representation, and should avoid legalization and have fewer opportunities
  for a representational error.

  Rather than invent a new shuffle mask operand type, similar to what
  ShuffleVectorSDNode does, just track the original IR Constant mask operand. I
  don't completely like the idea of adding another link to the IR, but MIR is
  already quite dependent on IR constants, and this will allow sharing the shuffle
  mask utility functions with the IR.

  llvm-svn: 368704

* [globalisel] Add G_SEXT_INREG (Daniel Sanders, 2019-08-09, 10 files, -351/+472)
  Summary: Targets often have instructions that can sign-extend certain cases
  faster than the equivalent shift-left/arithmetic-shift-right. Such cases can be
  identified by matching a shift-left/shift-right pair but there are some issues
  with this in the context of combines. For example, suppose you can sign-extend
  8-bit up to 32-bit with a target extend instruction.

    %1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
    %2:_(s32) = G_ASHR %1:_(s32), i32 24
    %3:_(s32) = G_ASHR %2:_(s32), i32 1

  would reasonably combine to:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 25

  which no longer matches the special case. If your shifts and extend are equal
  cost, this would break even as a pair of shifts but if your shift is more
  expensive than the extend then it's cheaper as:

    %2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
    %3:_(s32) = G_ASHR %2:_(s32), i32 1

  It's possible to match the shift-pair in ISel and emit an extend and ashr.
  However, this is far from the only way to break this shift pair and make it hard
  to match the extends. Another example is that with the right known-zeros, this:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 24
    %3:_(s32) = G_MUL %2:_(s32), i32 2

  can become:

    %1:_(s32) = G_SHL %0:_(s32), i32 24
    %2:_(s32) = G_ASHR %1:_(s32), i32 23

  All upstream targets have been configured to lower it to the current
  G_SHL,G_ASHR pair but will likely want to make it legal in some cases to handle
  their faster cases.

  To follow-up: Provide a way to legalize based on the constant. At the moment,
  I'm thinking that the best way to achieve this is to provide the MI in
  LegalityQuery but that opens the door to breaking core principles of the
  legalizer (legality is not context sensitive). That said, it's worth noting that
  looking at other instructions and acting on that information doesn't violate
  this principle in itself. It's only a violation if, at the end of legalization,
  a pass that checks legality without being able to see the context would say an
  instruction might not be legal. That's a fairly subtle distinction so to give a
  concrete example, saying %2 in:

    %1 = G_CONSTANT 16
    %2 = G_SEXT_INREG %0, %1

  is legal is in violation of that principle if the legality of %2 depends on %1
  being constant and/or being 16. However, legalizing to either:

    %2 = G_SEXT_INREG %0, 16

  or:

    %1 = G_CONSTANT 16
    %2:_(s32) = G_SHL %0, %1
    %3:_(s32) = G_ASHR %2, %1

  depending on whether %1 is constant and 16 does not violate that principle since
  both outputs are genuinely legal.

  Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm
  Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls,
  javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D61289
  llvm-svn: 368487

* [GISel]: Add GISelKnownBits analysis (Aditya Nandakumar, 2019-08-06, 1 file, -0/+1)
  https://reviews.llvm.org/D65698

  This adds a KnownBits analysis pass for GISel. This was done as a pass (compared
  to static functions) so that we can add other features such as caching queries
  (within a pass and across passes) in the future. This patch only adds the basic
  pass boilerplate, and implements a lazy non-caching knownbits implementation
  (ported from SelectionDAG). I've also hooked up the AArch64PreLegalizerCombiner
  pass to use this - there should be no compile time regression as the analysis is
  lazy.

  llvm-svn: 368065

* AArch64: bail instead of asserting on unexpected type in G_CONSTANT 0. (Tim Northover, 2019-08-06, 1 file, -0/+25)
  llvm-svn: 368031

* AArch64: use xzr/wzr for constant 0 in GlobalISel. (Tim Northover, 2019-08-06, 5 files, -28/+28)
  COPYs from xzr and wzr can often be folded away entirely during register
  allocation, unlike a movz.

  llvm-svn: 368003

* [AArch64][GlobalISel] Inline tiny memcpy et al at -O0. (Amara Emerson, 2019-08-05, 1 file, -0/+86)
  FastISel has done this since the initial arm64 port was upstreamed, so it seems
  there are no issues with doing this at -O0 for very small memcpys.

  Gives a 0.2% geomean code size improvement on CTMark.

  Differential Revision: https://reviews.llvm.org/D65758
  llvm-svn: 367919

* Re-commit "[GlobalISel] Add legalization support for non-power-2 loads and stores" (Amara Emerson, 2019-08-02, 2 files, -20/+50)
  This is an old commit that exposed a bug in the GISel importer, which caused
  non-truncating stores to be selected for truncating store patterns. Now that's
  been fixed in r367737 this can go back in.

  llvm-svn: 367739

* [AArch64][GlobalISel] Eliminate redundant G_ZEXT when the source is implicitly zext-loaded. (Amara Emerson, 2019-08-02, 2 files, -2/+50)
  These cases can come up when the extending loads combiner doesn't combine a
  zext(load) to a zextload op, due to some other operation being in between, which
  then gets simplified at a later stage.

  Differential Revision: https://reviews.llvm.org/D65360
  llvm-svn: 367723

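  A minimal sketch of the shape being targeted (illustrative only; registers and
  types are invented, not taken from the patch):

  ```
  %0:_(p0) = COPY $x0
  %1:_(s8) = G_LOAD %0(p0) :: (load 1)
  ; ... an unrelated operation in between keeps the extending-loads combiner
  ; from turning this into a G_ZEXTLOAD ...
  %2:_(s32) = G_ZEXT %1(s8)
  ; ldrb already zero-fills the rest of the W register, so the G_ZEXT can be
  ; selected as a plain copy of the loaded value rather than a separate
  ; zero-extend instruction.
  ```
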
* [AArch64][GlobalISel] Support the neg_addsub_shifted_imm32 pattern (Jessica Paquette, 2019-08-02, 1 file, -0/+155)
  Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
  allows us to select immediate adds of -1 by turning them into subtracts.

  Update select-binop.mir to show that the pattern works.

  Differential Revision: https://reviews.llvm.org/D65460
  llvm-svn: 367700

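  A sketch of the kind of input this enables (illustrative only; register banks
  and constants are invented, not taken from the patch):

  ```
  %0:gpr(s32) = COPY $w0
  %1:gpr(s32) = G_CONSTANT i32 -1
  %2:gpr(s32) = G_ADD %0, %1
  ; -1 does not fit the 12-bit ADD immediate form, but its negation does, so
  ; this can now select to:  sub w0, w0, #1
  ```
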
* GlobalISel: support swiftself attribute (Tim Northover, 2019-08-02, 2 files, -8/+62)
  llvm-svn: 367683

* GlobalISel: Add G_ATOMICRMW_{FADD|FSUB} (Matt Arsenault, 2019-07-30, 1 file, -0/+6)
  llvm-svn: 367369

* [AArch64][GlobalISel] Implement narrowing of G_SEXT. (Amara Emerson, 2019-07-26, 1 file, -0/+25)
  We need this to narrow a sext to s128.

  Differential Revision: https://reviews.llvm.org/D65357
  llvm-svn: 367164

* [AArch64][GlobalISel] Select @llvm.aarch64.stlxr for 32-bit pointers (Jessica Paquette, 2019-07-26, 1 file, -1/+29)
  Add partial instruction selection for intrinsics like this:

  ```
  declare i32 @llvm.aarch64.stlxr(i64, i32*)
  ```

  (This only handles the case where a G_ZEXT is feeding the intrinsic.)

  Also make sure that the added store instruction actually has the memory op from
  the original G_STORE.

  Update select-stlxr-intrin.mir and arm64-ldxr-stxr.ll.

  Differential Revision: https://reviews.llvm.org/D65355
  llvm-svn: 367163

* [AArch64][GlobalISel] Fix G_SELECT legalization fallback after r366943. (Amara Emerson, 2019-07-25, 1 file, -12/+14)
  Changes the order of legalization of G_ICMP suggested by Petar in D65079.

  llvm-svn: 367060

* [AArch64][GlobalISel] Select immediate modes for ADD when selecting G_GEP (Jessica Paquette, 2019-07-24, 2 files, -8/+52)
  Before, we weren't able to select things like this for G_GEP:

    add x0, x8, #8

  And instead we'd materialize the 8. This teaches GISel to do that. It gives some
  considerable code size savings on 252.eon -- about 4%!

  Differential Revision: https://reviews.llvm.org/D65248
  llvm-svn: 366959

* [AArch64][GlobalISel] Don't try to use GISel if subtarget doesn't have neon or fp. (Amara Emerson, 2019-07-24, 1 file, -0/+13)
  Throughout the legalizerinfo we currently make the assumption that the target
  has neon and FP target features available. Fixing it will require a refactor of
  the whole thing, so until then make sure we fall back.

  Works around PR42734

  Differential Revision: https://reviews.llvm.org/D65244
  llvm-svn: 366957

* [AArch64][GlobalISel] Fold G_MUL into XRO load addressing mode when possible (Jessica Paquette, 2019-07-24, 1 file, -0/+176)
  If we have a G_MUL, and either the LHS or the RHS of that mul is the legal shift
  value for a load addressing mode, we can fold it into the load.

  This gives some code size savings on some SPEC tests. The best are around 2% on
  300.twolf and 3% on 254.gap.

  Differential Revision: https://reviews.llvm.org/D65173
  llvm-svn: 366954

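  Roughly the shape of MIR this fold targets (a sketch; register numbers and the
  element size are invented, not taken from the patch):

  ```
  %0:_(p0) = COPY $x0
  %1:_(s64) = COPY $x1
  %2:_(s64) = G_CONSTANT i64 8
  %3:_(s64) = G_MUL %1, %2
  %4:_(p0) = G_GEP %0, %3(s64)
  %5:_(s64) = G_LOAD %4(p0) :: (load 8)
  ; The multiply by 8 is exactly the legal shift amount for an 8-byte load, so
  ; the whole address computation can fold into:  ldr x0, [x0, x1, lsl #3]
  ```
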
* [GlobalISel] Support for inlining memcpy, memset and memmove calls. (Amara Emerson, 2019-07-24, 3 files, -0/+487)
  This introduces a new family of combiner helper routines that re-use the target
  specific cost model from SelectionDAG, and generate inline implementations of
  the memcpy family of intrinsics.

  The combines are only enabled at optimization levels higher than -O0, and give
  very substantial performance improvements.

  Differential Revision: https://reviews.llvm.org/D65167
  llvm-svn: 366951

* [AArch64][GlobalISel] Fix a crash during s128 G_ICMP legalization due to r366317. (Amara Emerson, 2019-07-24, 1 file, -0/+40)
  r366317 added a legalization for s128 G_ICMP narrow scalar which tried to hard
  code the result type of the new legalized G_SELECT. Change this to instead use
  the type of the original G_ICMP result and allow the target to legalize it if
  necessary later.

  llvm-svn: 366943

* [AArch64][GlobalISel] Make vector dup optimization look at last elt of ZeroVec (Jessica Paquette, 2019-07-24, 1 file, -0/+40)
  Fix an off-by-one error which made us not look at the last element of the zero
  vector. This caused a miscompile in 188.ammp.

  Differential Revision: https://reviews.llvm.org/D65168
  llvm-svn: 366930

* [AArch64][GlobalISel] Add support for s128 loads, stores, extracts, truncs. (Amara Emerson, 2019-07-23, 9 files, -266/+126)
  We need to be able to load and store s128 for memcpy inlining, where we want to
  generate Q register mem ops. Making these legal also requires that we add some
  support in other instructions. Regbankselect should also know about these since
  they have no GPR register class that can hold them, so need special handling to
  live on the FPR bank.

  Differential Revision: https://reviews.llvm.org/D65166
  llvm-svn: 366857

* [GlobalISel][AArch64] Save a copy on G_SELECT by fixing condition to GPR (Jessica Paquette, 2019-07-23, 2 files, -12/+7)
  The condition can never be fed by FPRs, so it should always be on a GPR.

  Differential Revision: https://reviews.llvm.org/D65157
  llvm-svn: 366854

* [GlobalISel][AArch64] Teach GISel to handle shifts in load addressing modes (Jessica Paquette, 2019-07-23, 1 file, -0/+242)
  When we select the XRO variants of loads, we can pull in very specific shifts
  (of the size of an element). E.g.

  ```
  ldr x1, [x2, x3, lsl #3]
  ```

  This teaches GISel to handle these when they're coming from shifts specifically.

  This adds a new addressing mode function, `selectAddrModeShiftedExtendXReg`
  which recognizes this pattern.

  This also packs this up with `selectAddrModeRegisterOffset` into
  `selectAddrModeXRO`. This is intended to be equivalent to `selectAddrModeXRO` in
  AArch64ISelDAGtoDAG.

  Also update load-addressing-modes to show that all of the cases here work.

  Differential Revision: https://reviews.llvm.org/D65119
  llvm-svn: 366819

* [GISel]: Attach missing range metadata while translating G_LOADs (Aditya Nandakumar, 2019-07-21, 1 file, -2/+9)
  https://reviews.llvm.org/D65048

  Attach range information to G_LOAD when only defining one register.

  reviewed by: arsenm
  llvm-svn: 366656

* [GlobalISel][AArch64] Contract trivial same-size cross-bank copies into G_STOREs (Jessica Paquette, 2019-07-20, 1 file, -0/+89)
  Sometimes, you can end up with cross-bank copies between same-sized GPRs and
  FPRs, which feed into G_STOREs. When these copies feed only into stores, they
  aren't necessary; we can just store using the original register bank.

  This provides some minor code size savings for some floating point SPEC
  benchmarks. (Around 0.2% for 453.povray and 450.soplex)

  This issue doesn't seem to show up due to regbankselect or anything similar. So,
  this patch introduces an early select function, `contractCrossBankCopyIntoStore`
  which performs the contraction when possible. The selector then continues
  normally and selects the correct store opcode, eliminating needless copies along
  the way.

  Differential Revision: https://reviews.llvm.org/D65024
  llvm-svn: 366625

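  A sketch of the contraction described above (illustrative only; register banks
  and numbers are invented, not taken from the patch):

  ```
  %0:gpr(p0) = COPY $x0
  %1:fpr(s64) = COPY $d0
  %2:gpr(s64) = COPY %1(s64)
  G_STORE %2(s64), %0(p0) :: (store 8)
  ; The cross-bank copy only feeds the store, so store straight from the FPR
  ; bank instead:
  G_STORE %1(s64), %0(p0) :: (store 8)
  ; Selection can then pick an FP store (str d0, [x0]) rather than fmov + str.
  ```
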
* [GlobalISel] Translate calls to memcpy et al to G_INTRINSIC_W_SIDE_EFFECTS and legalize later. (Amara Emerson, 2019-07-19, 2 files, -13/+105)
  I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't
  do that unless we delay lowering to actual function calls. This patch changes
  the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and
  then has each target specify, using the new custom legalizer hook for
  intrinsics, that it wants them expanded as libcalls.

  Differential Revision: https://reviews.llvm.org/D64895
  llvm-svn: 366516

* [GlobalISel][AArch64] Add support for base register + offset register loads (Jessica Paquette, 2019-07-18, 1 file, -0/+90)
  Add support for folding G_GEPs into loads of the form

  ```
  ldr reg, [base, off]
  ```

  when possible. This can save an add before the load. Currently, this is only
  supported for loads of 64 bits into 64 bit registers.

  Add a new addressing mode function, `selectAddrModeRegisterOffset` which
  performs this folding when it is profitable.

  Also add a test for addressing modes for G_LOAD.

  Differential Revision: https://reviews.llvm.org/D64944
  llvm-svn: 366503

* [test][AArch64] Relax the opcode tests for FP min/max instructions. (Douglas Yung, 2019-07-12, 1 file, -6/+6)
  llvm-svn: 365961

* [AArch64][GlobalISel] Optimize compare and branch cases with G_INTTOPTR and unknown values. (Amara Emerson, 2019-07-10, 3 files, -39/+113)
  Since we have distinct types for pointers and scalars, G_INTTOPTRs can sometimes
  obstruct attempts to find constant source values. These usually come about when
  trying to do some kind of null pointer check. Teaching
  getConstantVRegValWithLookThrough about this operation allows the CBZ/CBNZ
  optimization to catch more cases.

  This change also improves the case where we can't find a constant source at all.
  Previously we would emit a cmp, cset and tbnz for that. Now we try to just emit
  a cmp and conditional branch, saving an instruction.

  The cumulative code size improvement of this change plus D64354 is 5.5% geomean
  on arm64 CTMark -O0.

  Differential Revision: https://reviews.llvm.org/D64377
  llvm-svn: 365690

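  A sketch of the null-check shape involved (illustrative only; registers and
  block numbers are invented, not taken from the patch):

  ```
  %0:_(p0) = COPY $x0
  %1:_(s64) = G_CONSTANT i64 0
  %2:_(p0) = G_INTTOPTR %1(s64)
  %3:_(s1) = G_ICMP intpred(eq), %0(p0), %2
  G_BRCOND %3(s1), %bb.2
  ; Looking through the G_INTTOPTR exposes the constant 0, so this can select
  ; to a single  cbz x0, <target>  instead of cmp + cset + tbnz.
  ```
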
* [GlobalISel][AArch64] Use getOpcodeDef instead of findMIFromReg (Jessica Paquette, 2019-07-10, 1 file, -0/+22)
  Some minor cleanup.

  This function in Utils does the same thing as `findMIFromReg`. It also looks
  through copies, which `findMIFromReg` didn't.

  Delete `findMIFromReg` and use `getOpcodeDef` instead. This only happens in
  `tryOptVectorDup` right now.

  Update opt-shuffle-splat to show that we can look through the copies now, too.

  Differential Revision: https://reviews.llvm.org/D64520
  llvm-svn: 365684

* [GlobalISel][AArch64][NFC] Use getDefIgnoringCopies from Utils where we can (Jessica Paquette, 2019-07-10, 1 file, -0/+27)
  There are a few places where we walk over copies throughout
  AArch64InstructionSelector.cpp. In Utils, there's a function that does exactly
  this which we can use instead.

  Note that the utility function works with the case where we run into a COPY
  from a physical register. We've run into bugs with this a couple times, so using
  it should defend us from similar future bugs.

  Also update opt-fold-compare.mir to show that we still handle physical registers
  properly.

  Differential Revision: https://reviews.llvm.org/D64513
  llvm-svn: 365683

* GlobalISel: Define the full family of FP min/max instructions (Matt Arsenault, 2019-07-10, 2 files, -0/+106)
  llvm-svn: 365657

* [AArch64][GlobalISel] Optimize conditional branches followed by unconditional branches (Amara Emerson, 2019-07-09, 2 files, -3/+83)
  If we have an icmp->brcond->br sequence where the brcond just branches to the
  next block jumping over the br, while the br takes the false edge, then we can
  modify the conditional branch to jump to the br's target while inverting the
  condition of the incoming icmp. This means we can eliminate the br as an
  unconditional branch to the fallthrough block.

  Differential Revision: https://reviews.llvm.org/D64354
  llvm-svn: 365510

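  A sketch of the CFG rewrite described above (illustrative only; the blocks,
  registers, and predicate are invented, not taken from the patch):

  ```
  bb.0:
    %2:_(s1) = G_ICMP intpred(eq), %0(s64), %1
    G_BRCOND %2(s1), %bb.1        ; %bb.1 is the next (fallthrough) block
    G_BR %bb.2
  ; After the optimization, the compare is inverted, the conditional branch
  ; targets %bb.2, and the G_BR is deleted (we simply fall through to %bb.1):
  bb.0:
    %2:_(s1) = G_ICMP intpred(ne), %0(s64), %1
    G_BRCOND %2(s1), %bb.2
  ```
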
* [AArch64][GlobalISel] Use TST for comparisons when possible (Jessica Paquette, 2019-07-08, 1 file, -2/+207)
  Porting over the part of `emitComparison` in AArch64ISelLowering where we use
  TST to represent a compare.

  - Rename `tryOptCMN` to `tryFoldIntegerCompare`, since it now also emits TSTs
    when possible.
  - Add a utility function for emitting a TST with register operands.
  - Rename opt-fold-cmn.mir to opt-fold-compare.mir, since it now also tests the
    TST fold as well.

  Differential Revision: https://reviews.llvm.org/D64371
  llvm-svn: 365404

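  A sketch of a compare that can be emitted as a TST (illustrative only; the
  register banks, result type, and constants are invented, not taken from the
  patch):

  ```
  %0:gpr(s32) = COPY $w0
  %1:gpr(s32) = COPY $w1
  %2:gpr(s32) = G_AND %0, %1
  %3:gpr(s32) = G_CONSTANT i32 0
  %4:gpr(s32) = G_ICMP intpred(eq), %2(s32), %3
  ; Instead of and + cmp, the compare-against-zero of an AND can be emitted as
  ; an ANDS into the zero register:  tst w0, w1
  ```
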
* [GlobalISel][AArch64] Use getConstantVRegValWithLookThrough for selectArithImmed (Jessica Paquette, 2019-07-03, 2 files, -10/+51)
  Instead of just stopping to see if we have a G_CONSTANT, look through G_TRUNCs,
  G_SEXTs, and G_ZEXTs.

  This gives an average ~1.3% code size improvement on CINT2000 at -O3.

  Differential Revision: https://reviews.llvm.org/D64108
  llvm-svn: 365063

* [AArch64][GlobalISel] Overhaul legalization & isel of shifts to select immediate forms. (Amara Emerson, 2019-07-03, 9 files, -52/+423)
  There are two main issues preventing us from generating immediate form shifts:

  1) We have partial SelectionDAG imported support for G_ASHR and G_LSHR shift
     immediate forms, but they currently don't work because the amount type is
     expected to be an s64 constant, but we only legalize them to have homogeneous
     types. To deal with this, first we introduce a custom legalizer to *only*
     custom legalize s32 shifts which have a constant operand into a s64. There is
     also an additional artifact combiner to fold zexts(g_constant) to a larger
     G_CONSTANT if it's legal, a counterpart to the anyext version committed in an
     earlier patch.

  2) For G_SHL the importer can't cope with the pattern. For this I introduced an
     early selection phase in the arm64 selector to select these forms manually
     before the tablegen selector pessimizes it to a register-register variant.

  Differential Revision: https://reviews.llvm.org/D63910
  llvm-svn: 364994

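  A sketch of what the custom legalization in (1) above amounts to (illustrative
  only; registers and the shift amount are invented, not taken from the patch):

  ```
  %0:_(s32) = COPY $w0
  %1:_(s32) = G_CONSTANT i32 2
  %2:_(s32) = G_SHL %0, %1(s32)
  ; Custom legalization rewrites the constant amount as an s64 so the imported
  ; immediate patterns can match; selection then gives  lsl w0, w0, #2:
  %3:_(s64) = G_CONSTANT i64 2
  %2:_(s32) = G_SHL %0, %3(s64)
  ```
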
* [AArch64][GlobalISel] Teach tryOptSelect to handle G_ICMP (Jessica Paquette, 2019-07-02, 3 files, -26/+73)
  This teaches `tryOptSelect` to handle folding G_ICMP, and removes the
  requirement that the G_SELECT we're dealing with is floating point.

  Some refactoring to make this work nicely as well:

  - Factor out the scalar case from the selection code for G_ICMP into
    `emitIntegerCompare`.
  - Make `tryOptCMN` return a MachineInstr* instead of a bool.
  - Make `tryOptCMN` not modify the instruction being selected.
  - Factor out the CMN emission into `emitCMN` for readability.

  By doing it this way, we can get all of the compare selection optimizations in
  select emission.

  Differential Revision: https://reviews.llvm.org/D64084
  llvm-svn: 364961

* GlobalISel: Add G_FENCE (Matt Arsenault, 2019-07-02, 1 file, -0/+3)
  The pattern importer is for some reason emitting checks for G_CONSTANT for the
  immediate operands.

  llvm-svn: 364926

* [GlobalISel][IRTranslator] Fix some PHI bugs related to jump tables when optimizations are used. (Amara Emerson, 2019-06-27, 1 file, -0/+1141)
  The new switch lowering code that tries to generate jump tables and range checks
  was tested at -O0 on arm64, but on -O3 the generic switch lowering code goes to
  town on trying to generate optimized lowerings, e.g. multiple jump tables, range
  checks etc. This exposed bugs in the way PHI nodes are handled because the CFG
  looks even stranger after all of this is done.

  llvm-svn: 364613

* [GlobalISel] Accept multiple vregs for lowerCall's args (Diana Picus, 2019-06-27, 3 files, -36/+10)
  Change the interface of CallLowering::lowerCall to accept several virtual
  registers for each argument, instead of just one. This is a follow-up to D46018.

  CallLowering::lowerReturn was similarly refactored in D49660 and
  lowerFormalArguments in D63549.

  With this change, we no longer pack the virtual registers generated for
  aggregates into one big lump before delegating to the target. Therefore, the
  target can decide itself whether it wants to handle them as separate pieces or
  use one big register.

  ARM and AArch64 have been updated to use the passed in virtual registers
  directly, which means we no longer need to generate so many merge/extract
  instructions.

  NFCI for AMDGPU, Mips and X86.

  Differential Revision: https://reviews.llvm.org/D63551
  llvm-svn: 364512

* [GlobalISel] Accept multiple vregs for lowerCall's result (Diana Picus, 2019-06-27, 1 file, -2/+1)
  Change the interface of CallLowering::lowerCall to accept several virtual
  registers for the call result, instead of just one. This is a follow-up to
  D46018.

  CallLowering::lowerReturn was similarly refactored in D49660 and
  lowerFormalArguments in D63549.

  With this change, we no longer pack the virtual registers generated for
  aggregates into one big lump before delegating to the target. Therefore, the
  target can decide itself whether it wants to handle them as separate pieces or
  use one big register.

  ARM and AArch64 have been updated to use the passed in virtual registers
  directly, which means we no longer need to generate so many merge/extract
  instructions.

  NFCI for AMDGPU, Mips and X86.

  Differential Revision: https://reviews.llvm.org/D63550
  llvm-svn: 364511

* [GlobalISel] Accept multiple vregs in lowerFormalArgs (Diana Picus, 2019-06-27, 2 files, -12/+15)
  Change the interface of CallLowering::lowerFormalArguments to accept several
  virtual registers for each formal argument, instead of just one. This is a
  follow-up to D46018.

  CallLowering::lowerReturn was similarly refactored in D49660. lowerCall will be
  refactored in the same way in follow-up patches.

  With this change, we forward the virtual registers generated for aggregates to
  CallLowering. Therefore, the target can decide itself whether it wants to handle
  them as separate pieces or use one big register. We also copy the
  pack/unpackRegs helpers to CallLowering to facilitate this.

  ARM and AArch64 have been updated to use the passed in virtual registers
  directly, which means we no longer need to generate so many merge/extract
  instructions.

  AArch64 seems to have had a bug when lowering e.g. [1 x i8*], which was put into
  a s64 instead of a p0. Added a test-case which illustrates the problem more
  clearly (it crashes without this patch) and fixed the existing test-case to
  expect p0.

  AMDGPU has been updated to unpack into the virtual registers for kernels. I
  think the other code paths fall back for aggregates, so this should be NFC.

  Mips doesn't support aggregates yet, so it's also NFC.

  x86 seems to have code for dealing with aggregates, but I couldn't find the
  tests for it, so I just added a fallback to DAGISel if we get more than one
  virtual register for an argument.

  Differential Revision: https://reviews.llvm.org/D63549
  llvm-svn: 364510

* [AArch64][GlobalISel] Implement selection support for the new G_JUMP_TABLE and G_BRJT ops. (Amara Emerson, 2019-06-21, 2 files, -1/+126)
  With this we can now fully code generate jump tables, which is important for
  code size.

  Differential Revision: https://reviews.llvm.org/D63223
  llvm-svn: 364086

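  The generic MIR these opcodes produce looks roughly like this (a sketch; the
  register numbers and index are invented, not taken from the patch):

  ```
  %2:_(s64) = COPY $x0               ; zero-based case index
  %3:_(p0) = G_JUMP_TABLE %jump-table.0
  G_BRJT %3(p0), %jump-table.0, %2(s64)
  ; %3 holds the address of jump table 0; G_BRJT branches to the table entry
  ; selected by the index in %2.
  ```
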
* [GlobalISel][IRTranslator] Change switch table translation to generate jump tables and range checks. (Amara Emerson, 2019-06-21, 2 files, -123/+273)
  This change makes use of the newly refactored SwitchLoweringUtils code from
  SelectionDAG in order to generate jump tables and range checks where
  appropriate.

  Much of this code is ported from SDAG with some modifications. We generate
  G_JUMP_TABLE and G_BRJT instructions when JT opportunities are found. This means
  that targets which previously relied on the naive one MBB per case stmt
  translation will now start falling back until they add support for the new
  opcodes.

  For range checks, we don't generate any previously unused operations. This just
  recognizes contiguous ranges of case values and generates a single block per
  range. Single case value blocks are just a special case of ranges so we get that
  support almost for free.

  There are still some optimizations missing that I haven't ported over, and
  bit-tests are also unimplemented. This patch series is already complex enough.

  Actual arm64 support for selection of jump tables is coming in a later patch.

  Differential Revision: https://reviews.llvm.org/D63169
  llvm-svn: 364085

* [AArch64][GlobalISel] Make s8 and s16 G_CONSTANTs legal. (Amara Emerson, 2019-06-21, 11 files, -142/+156)
  We sometimes get poor code size because constants of types < 32b are legalized
  as 32 bit G_CONSTANTs with a truncate to fit. This works but means that the
  localizer can no longer sink them (although it's possible to extend it to do
  so). On AArch64 however s8 and s16 constants can be selected in the same way as
  s32 constants, with a mov pseudo into a W register. If we make s8 and s16
  constants legal then we can avoid unnecessary truncates, they can be CSE'd, and
  the localizer can sink them as normal.

  There is a caveat: if the user of a smaller constant has to widen the sources,
  we end up with an anyext of the smaller typed G_CONSTANT. This can cause
  regressions because of the additional extend and missed pattern matching. To
  remedy this, there's a new artifact combiner to generate the wider G_CONSTANT if
  it's legal for the target.

  Differential Revision: https://reviews.llvm.org/D63587
  llvm-svn: 364075

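  A sketch of the difference (illustrative only; registers and constants are
  invented, not taken from the patch):

  ```
  ; Before: an s8 constant had to be built as a 32-bit constant plus a truncate,
  ; which the localizer could not sink and which defeated CSE:
  %0:_(s32) = G_CONSTANT i32 1
  %1:_(s8) = G_TRUNC %0(s32)
  ; After: the s8 constant is legal and is selected like an s32 constant
  ; (a mov into a W register):
  %1:_(s8) = G_CONSTANT i8 1
  ```
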
* [GlobalISel][Localizer] Allow localization of G_INTTOPTR and chains of instructions. (Amara Emerson, 2019-06-21, 1 file, -1/+63)
  G_INTTOPTR can prevent the localizer from moving G_CONSTANTs, but since it's
  essentially a side-effect-free cast instruction, we can remat both instructions.
  This patch changes the localizer to enable localization of the chains by
  iterating over the entry block instructions in reverse order. That way, uses
  will be localized first, and then the defs are free to be localized as well.

  This also changes the previous SmallPtrSet of localized instructions to use a
  SetVector instead. We're dealing with pointers and need deterministic iteration
  order.

  Overall, this change improves ARM64 -O0 CTMark code size by around 0.7% geomean.

  Differential Revision: https://reviews.llvm.org/D63630
  llvm-svn: 364001

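  A sketch of a chain the localizer can now move (illustrative only; the constant
  and registers are invented, not taken from the patch):

  ```
  ; In the entry block:
  %0:_(s64) = G_CONSTANT i64 65536
  %1:_(p0) = G_INTTOPTR %0(s64)
  ; Both instructions are cheap to rematerialize. Walking the entry block in
  ; reverse lets the localizer sink the use (%1) next to the block that needs
  ; the pointer first, which then frees its def (%0) to be sunk after it,
  ; shortening live ranges at -O0.
  ```
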