path: root/llvm/test/CodeGen/ARM
* [ARM] Only produce qadd8b under hasV6Ops (David Green, 2020-05-19; 1 file changed, -0/+1)
  When compiling for an arm5te cpu from clang, the +dsp attribute is set. This meant we could try to generate qadd8 instructions where we would end up having no pattern. I've changed the condition here to be hasV6Ops && hasDSP, which is what other parts of ARMISelLowering use for similar instructions.
  Fixes PR45677.
  Differential Revision: https://reviews.llvm.org/D78877
  (cherry picked from commit 8807139026b64ac40163bb255dad38a1d8054f08)
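  A minimal IR sketch of the kind of pattern involved (a hypothetical reduced example, not the original PR45677 reproducer): qadd8 performs four lane-wise saturating signed byte additions, so a v4i8 saturating add is the sort of node that should only be matched when both v6 ops and DSP are available.

      ; Assumed triple and attributes, for illustration only.
      target triple = "armv5te-none-eabi"

      declare <4 x i8> @llvm.sadd.sat.v4i8(<4 x i8>, <4 x i8>)

      define <4 x i8> @sat_add(<4 x i8> %a, <4 x i8> %b) "target-features"="+dsp" {
        ; On armv5te (+dsp but no v6 ops) this must not be selected to qadd8.
        %r = call <4 x i8> @llvm.sadd.sat.v4i8(<4 x i8> %a, <4 x i8> %b)
        ret <4 x i8> %r
      }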
* [MachineSink] Fix for breaking phi edges with instructions with multiple defs (David Green, 2020-05-07; 1 file changed, -0/+56)
  BreakPHIEdge would be set based on whether the instruction needs to insert a new critical edge to allow sinking into a block where the uses are PHI nodes. But for instructions with multiple defs it would be reset on the second def, allowing the instruction to sink where it should not.
  Fixes PR44981
  Differential Revision: https://reviews.llvm.org/D78087
  (cherry picked from commit 44c4ba34d001dcf538d7396007b5611d6f697f86)
* Don't generate libcalls for wide shift on Windows ARM (PR42711) (Hans Wennborg, 2020-02-25; 1 file changed, -1/+7)
  The previous patch (cff90f07cb5cc3c3bc58277926103af31caef308) didn't cover ARM.
  (cherry picked from commit decd021facba804b57e8d80b6159c987d3261ab8)
* [FPEnv][ARM] Don't call mutateStrictFPToFP when lowering (John Brawn, 2020-02-18; 1 file changed, -0/+45)
  mutateStrictFPToFP can delete the node and replace it with another with the same value, which can later cause problems, and returning the result of mutateStrictFPToFP doesn't work because SelectionDAGLegalize expects that the returned value has the same number of results as the original. Instead handle things by doing the mutation manually.
  Differential Revision: https://reviews.llvm.org/D74726
  (cherry picked from commit 594a89f7270da74c89f2321432bc6a7135773fa5)
* [ARM] Fix infinite loop when lowering STRICT_FP_EXTEND (John Brawn, 2020-02-18; 1 file changed, -19/+39)
  If the target has FP64 but not FP16 then we have custom lowering for FP_EXTEND and STRICT_FP_EXTEND with type f64. However, if the extend is from f32 to f64 the current implementation will cause an infinite loop for STRICT_FP_EXTEND due to emitting a merge_values of the original node, which after replacement becomes a merge_values of itself. Fix this by not doing anything for f32 to f64 extends when we have FP64, though for STRICT_FP_EXTEND we have to do the strict-to-nonstrict mutation, as that doesn't happen automatically for opcodes with custom lowering.
  Differential Revision: https://reviews.llvm.org/D74559
  (cherry picked from commit 0ec57972967dfb43fc022c2e3788be041d1db730)
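  For reference, the kind of IR that exercises this path is a strict f32-to-f64 extend through the constrained intrinsic (a minimal sketch, not the commit's own test):

      declare double @llvm.experimental.constrained.fpext.f64.f32(float, metadata)

      define double @strict_ext(float %x) strictfp {
        ; Lowered through STRICT_FP_EXTEND; with FP64 but no FP16 this previously looped.
        %r = call double @llvm.experimental.constrained.fpext.f64.f32(float %x, metadata !"fpexcept.strict") strictfp
        ret double %r
      }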
* [FPEnv][ARM] Add lowering of STRICT_FSETCC and STRICT_FSETCCS (John Brawn, 2020-02-18; 1 file changed, -8/+451)
  These can be lowered to code sequences using CMPFP and CMPFPE, which then get selected to VCMP and VCMPE. The implementation isn't fully correct, as the chain operand isn't handled correctly, but resolving that looks like it would involve changes around FPSCR-handling instructions and how the FPSCR is modelled.
  The fp-intrinsics test was already testing some of this, but as the entire test was being XFAILed it wasn't noticed. Un-XFAIL the test and instead leave the cases where we aren't generating the right instruction sequences as FIXME.
  Differential Revision: https://reviews.llvm.org/D73194
  (cherry picked from commit b37d59353f699e99f139a9227a6a69964ef4b132)
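  The corresponding IR uses the constrained compare intrinsics; a minimal sketch (an assumed shape, not the fp-intrinsics test itself):

      declare i1 @llvm.experimental.constrained.fcmp.f32(float, float, metadata, metadata)
      declare i1 @llvm.experimental.constrained.fcmps.f32(float, float, metadata, metadata)

      define i1 @quiet_oeq(float %a, float %b) strictfp {
        ; Quiet compare: STRICT_FSETCC -> CMPFP -> VCMP.
        %r = call i1 @llvm.experimental.constrained.fcmp.f32(float %a, float %b, metadata !"oeq", metadata !"fpexcept.strict") strictfp
        ret i1 %r
      }

      define i1 @signaling_olt(float %a, float %b) strictfp {
        ; Signaling compare: STRICT_FSETCCS -> CMPFPE -> VCMPE.
        %r = call i1 @llvm.experimental.constrained.fcmps.f32(float %a, float %b, metadata !"olt", metadata !"fpexcept.strict") strictfp
        ret i1 %r
      }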
* Revert "[DebugInfo] Remove some users of DBG_VALUEs IsIndirect field"Jeremy Morse2020-02-121-1/+1
| | | | | | | | | | | | | | | | | | | | | | | This reverts commit ed29dbaafa49bb8c9039a35f768244c394411fea. I'm backing out D68945, which as the discussion for D73526 shows, doesn't seem to handle the -O0 path through the codegen backend correctly. I'll reland the patch when a fix is worked out, apologies for all the churn. The two parent commits are part of this revert too. Conflicts: llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp llvm/test/DebugInfo/X86/dbg-addr-dse.ll SelectionDAGBuilder conflict is due to a nearby change in e39e2b4a79c6 that's technically unrelated. dbg-addr-dse.ll conflicted because 41206b61e30c (legitimately) changes the order of two lines. There are further modifications to dbg-value-func-arg.ll: it landed after the patch being reverted, and I've converted indirection to be represented by the isIndirect field rather than DW_OP_deref. (cherry picked from commit 6531a78ac4b5b229bce272706593a0bc873877d7)
* Revert "[ARM] Improve codegen of volatile load/store of i64"Victor Campos2020-02-081-180/+0
| | | | This reverts commit 60e0120c913dd1d4bfe33769e1f000a076249a42.
* [ARM] Expand vector reduction intrinsics on soft float (Nikita Popov, 2020-02-05; 1 file changed, -0/+63)
  Followup to D73135. If the target doesn't have hard float (the default for ARM), then we assert when trying to soften the result of vector reduction intrinsics. This patch marks these for expansion as well. (A bit odd to use vectors on a target without hard float ... but that's where you end up if you expose target-independent vector types.)
  Differential Revision: https://reviews.llvm.org/D73854
  (cherry picked from commit 1cc4f8d17247cd9be88addd75d060f9321b6f8b0)
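  The intrinsics in question look like the following sketch (written with the current llvm.vector.reduce names; in this 2020 timeframe they still lived under the llvm.experimental.vector.reduce namespace, so the exact spelling here is an assumption):

      declare float @llvm.vector.reduce.fadd.v4f32(float, <4 x float>)

      ; On a soft-float ARM target the f32 result must be softened to an
      ; integer libcall-style value; without the expand action this asserted.
      define float @reduce(<4 x float> %v) {
        %r = call fast float @llvm.vector.reduce.fadd.v4f32(float 0.0, <4 x float> %v)
        ret float %r
      }

  Dropping the fast/reassoc flag gives the ordered form handled by D73135 in the following entry.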
* [AArch64][ARM] Always expand ordered vector reductions (PR44600) (Nikita Popov, 2020-02-05; 2 files changed, -0/+332)
  fadd/fmul reductions without reassoc are lowered to VECREDUCE_STRICT_FADD/FMUL nodes, which don't have legalization support. Until that is in place, expand these intrinsics on ARM and AArch64. Other targets always expand the vector reduction intrinsics.
  Additionally expand fmax/fmin reductions without the nonan flag on AArch64, as the backend asserts that the flag is present when lowering VECREDUCE_FMIN/FMAX.
  This fixes https://bugs.llvm.org/show_bug.cgi?id=44600.
  Differential Revision: https://reviews.llvm.org/D73135
  (cherry picked from commit 70d345e687caba4ac1f95655c6924dfa91e0083f)
* [CodeGen] Move fentry-insert, xray-instrumentation and patchable-function before addPreEmitPass() (Fangrui Song, 2020-01-24; 1 file changed, -3/+3)
  The intention is to move patchable-function before aarch64-branch-targets (configured in AArch64PassConfig::addPreEmitPass) so that we emit BTI before NOPs (see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92424). This also allows addPreEmitPass() passes to know the precise instruction sizes if they want.
  Tried x86-64 Debug/Release builds of ccls with -fxray-instrument -fxray-instruction-threshold=1. No output difference between this commit and the previous commit.
  (cherry picked from commit 9a24488cb67a90f889529987275c5e411ce01dda)
* [ARM][Thumb2] Fix ADD/SUB invalid writes to SP (Diogo Sampaio, 2020-01-14; 2 files changed, -5/+5)
  Summary:
  This patch fixes pr23772 [ARM] r226200 can emit illegal thumb2 instruction: "sub sp, r12, #80". The violation was that SUB and ADD (reg, immediate) instructions can only write to SP if the source register is also SP, so the above instruction was unpredictable.
  To enforce that the instruction t2(ADD|SUB)ri does not write to SP, we now constrain the destination register to be rGPR (which excludes PC and SP). Unlike the ARM specification, which defines one instruction that can read from SP and one that can't, here we inserted one that can't write to SP and another that can only write to SP, so as to reuse most of the hard-coded size optimizations.
  When performing this change, it was uncovered that emitting Thumb2 Reg plus Immediate could not emit all variants of ADD SP, SP #imm instructions, so it was refactored to be able to (see test/CodeGen/Thumb2/mve-stacksplot.mir where we use a subw sp, sp, Imm12 variant). It also uncovered a disassembly issue of adr.w instructions, which were only written as SUBW instructions (see llvm/test/MC/Disassembler/ARM/thumb2.txt).
  Reviewers: eli.friedman, dmgreen, carwil, olista01, efriedma, andreadb
  Reviewed By: efriedma
  Subscribers: gbedwell, john.brawn, efriedma, ostannard, kristof.beyls, hiraditya, dmgreen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70680
* [LegalizeTypes] Add SoftenFloatResult support for STRICT_SINT_TO_FP/STRICT_UINT_TO_FP (Andrew Wei, 2020-01-14; 1 file changed, -0/+18)
  Some targets with soft-float, like arm/riscv, crash at compile time when the -fno-unsafe-math-optimization option is used. This patch adds the missing strict FP support to SoftenFloatRes_XINT_TO_FP.
  Differential Revision: https://reviews.llvm.org/D72277
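  A minimal IR sketch of the conversions being softened (an assumed example, not the commit's test):

      declare float @llvm.experimental.constrained.sitofp.f32.i32(i32, metadata, metadata)

      define float @strict_cvt(i32 %x) strictfp {
        ; On a soft-float target the f32 result must be softened; STRICT_SINT_TO_FP
        ; previously had no SoftenFloatResult handler and crashed.
        %r = call float @llvm.experimental.constrained.sitofp.f32.i32(i32 %x, metadata !"round.dynamic", metadata !"fpexcept.strict") strictfp
        ret float %r
      }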
* [SelectionDAG] ComputeKnownBits - minimum leading/trailing zero bits in LSHR/SHL (PR44526) (Simon Pilgrim, 2020-01-13; 2 files changed, -75/+16)
  As detailed in https://blog.regehr.org/archives/1709, we don't make use of the known leading/trailing zeros for shifted values in cases where we don't know the shift amount value.
  This patch adds support to SelectionDAG::ComputeKnownBits to use KnownBits::countMinTrailingZeros and countMinLeadingZeros to set the minimum guaranteed leading/trailing known zero bits.
  Differential Revision: https://reviews.llvm.org/D72573
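  The idea in one hypothetical IR example: if a value has n known leading zeros, a logical right shift by an unknown amount still has at least n leading zeros, so a mask that only touches those bits is known to be zero.

      define i32 @known_high_zero(i32 %x, i32 %amt) {
        %low = and i32 %x, 255          ; bits 8..31 known zero
        %sh  = lshr i32 %low, %amt      ; shifting right can only add leading zeros
        %hi  = and i32 %sh, 4294901760  ; 0xFFFF0000: now provably zero
        ret i32 %hi
      }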
* Add support for __declspec(guard(nocf)) (Andrew Paverd, 2020-01-10; 1 file changed, -11/+11)
  Summary:
  Avoid using the `nocf_check` attribute with Control Flow Guard. Instead, use a new `"guard_nocf"` function attribute to indicate that checks should not be added on indirect calls within that function. Add support for `__declspec(guard(nocf))` following the same syntax as MSVC.
  Reviewers: rnk, dmajor, pcc, hans, aaron.ballman
  Reviewed By: aaron.ballman
  Subscribers: aaron.ballman, tomrittervg, hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D72167
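  At the IR level the new attribute is a plain string function attribute; a minimal sketch of how it appears (hypothetical function names):

      ; A function built with Control Flow Guard enabled, but opted out of
      ; checks on its indirect calls via __declspec(guard(nocf)).
      define void @dispatch(void ()* %fp) #0 {
        call void %fp()   ; no CFG check emitted for this indirect call
        ret void
      }

      attributes #0 = { "guard_nocf" }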
* Reverting, broke some bots. Need further investigation. (Diogo Sampaio, 2020-01-10; 2 files changed, -5/+5)
  Summary: This reverts commit 8c12769f3046029e2a9b4e48e1645b1a77d28650.
* [ARM][Thumb2] Fix ADD/SUB invalid writes to SP (Diogo Sampaio, 2020-01-10; 2 files changed, -5/+5)
  Summary:
  This patch fixes pr23772 [ARM] r226200 can emit illegal thumb2 instruction: "sub sp, r12, #80". The violation was that SUB and ADD (reg, immediate) instructions can only write to SP if the source register is also SP, so the above instruction was unpredictable.
  To enforce that the instruction t2(ADD|SUB)ri does not write to SP, we now constrain the destination register to be rGPR (which excludes PC and SP). Unlike the ARM specification, which defines one instruction that can read from SP and one that can't, here we inserted one that can't write to SP and another that can only write to SP, so as to reuse most of the hard-coded size optimizations.
  When performing this change, it was uncovered that emitting Thumb2 Reg plus Immediate could not emit all variants of ADD SP, SP #imm instructions, so it was refactored to be able to (see test/CodeGen/Thumb2/mve-stacksplot.mir where we use a subw sp, sp, Imm12 variant). It also uncovered a disassembly issue of adr.w instructions, which were only written as SUBW instructions (see llvm/test/MC/Disassembler/ARM/thumb2.txt).
  Reviewers: eli.friedman, dmgreen, carwil, olista01, efriedma
  Reviewed By: efriedma
  Subscribers: john.brawn, efriedma, ostannard, kristof.beyls, hiraditya, dmgreen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70680
* [ARM][MVE] MVE-I should not be disabled by -mfpu=none (Momchil Velikov, 2020-01-09; 1 file changed, -1/+1)
  Architecturally, it's allowed to have MVE-I without an FPU, thus -mfpu=none should not disable MVE-I, or moves to/from FP registers.
  This patch removes `+/-fpregs` from the features unconditionally added to the target feature list depending on FPU, and moves the logic to the Clang driver, where the negative form (`-fpregs`) is conditionally added to the target features list for the cases of `-mfloat-abi=soft`, or `-mfpu=none` without either `+mve` or `+mve.fp`. Only the negative form is added by the driver; the positive one is derived from other features in the backend.
  Differential Revision: https://reviews.llvm.org/D71843
* [ARM][MVE] Enable masked gathers from vector of pointers (Anna Welker, 2020-01-08; 1 file changed, -0/+1)
  Adds a pass to the ARM backend that takes a v4i32 gather and transforms it into a call to MVE's masked gather intrinsics.
  Differential Revision: https://reviews.llvm.org/D71743
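  The input the pass looks for is the generic masked gather intrinsic on a vector of pointers; a minimal sketch (typed-pointer syntax as used around this release):

      declare <4 x i32> @llvm.masked.gather.v4i32.v4p0i32(<4 x i32*>, i32, <4 x i1>, <4 x i32>)

      define <4 x i32> @gather(<4 x i32*> %ptrs, <4 x i1> %mask, <4 x i32> %passthru) {
        ; On MVE this can now become a call to the target's masked gather
        ; intrinsics instead of being scalarized.
        %v = call <4 x i32> @llvm.masked.gather.v4i32.v4p0i32(<4 x i32*> %ptrs, i32 4, <4 x i1> %mask, <4 x i32> %passthru)
        ret <4 x i32> %v
      }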
* [ARM] Regenerate bfi.ll test cases (Simon Pilgrim, 2020-01-07; 1 file changed, -27/+74)
* [ARM][MVE] VPT Blocks: findVCMPToFoldIntoVPS (Sjoerd Meijer, 2020-01-07; 1 file changed, -0/+1)
  This is a recommit of D71330, but with a few things fixed and changed:
  1) ReachingDefAnalysis: this was not running with optnone, as it was checking skipFunction(), which other analysis passes don't do. I guess this was a copy-paste from a codegen pass.
  2) VPTBlockPass: here I've added skipFunction(), because like most/all optimisations, we don't want to run this with optnone.
  This fixes the issues with the initial/previous commit: the VPTBlockPass was running with optnone, but ReachingDefAnalysis wasn't, and so VPTBlockPass was crashing when querying ReachingDefAnalysis.
  I've added test case mve-vpt-block-optnone.mir to check that we don't run VPTBlock with optnone.
  Differential Revision: https://reviews.llvm.org/D71470
* [ARM] Improve codegen of volatile load/store of i64 (Victor Campos, 2020-01-07; 1 file changed, -0/+180)
  Summary:
  Instead of generating two i32 instructions for each load or store of a volatile i64 value (two LDRs or STRs), now emit LDRD/STRD. These improvements cover architectures implementing ARMv5TE or Thumb-2.
  Reviewers: dmgreen, efriedma, john.brawn, nickdesaulniers
  Reviewed By: efriedma, nickdesaulniers
  Subscribers: nickdesaulniers, vvereschaka, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70072
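  A minimal IR sketch of the affected pattern (hypothetical function names); on ARMv5TE or Thumb-2 the 64-bit volatile access can now become a single LDRD/STRD instead of two LDR/STR instructions:

      define i64 @read_counter(i64* %p) {
        %v = load volatile i64, i64* %p, align 8
        ret i64 %v
      }

      define void @write_counter(i64* %p, i64 %v) {
        store volatile i64 %v, i64* %p, align 8
        ret void
      }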
* [ARM] Use correct TRAP opcode for thumb in FastISel (David Green, 2020-01-06; 1 file changed, -1/+1)
  We were previously unconditionally using the ARM::TRAP opcode, even under Thumb. My understanding is that these are essentially the same thing (they both result in a trap under Thumb), but the ARM::TRAP opcode is marked as requiring IsARM, so it is more correct to use ARM::tTRAP.
  Differential Revision: https://reviews.llvm.org/D72075
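  The IR reaching this FastISel path is simply the trap intrinsic (a sketch; FastISel is what runs at -O0):

      declare void @llvm.trap()

      define void @die() {
        ; Under Thumb, FastISel should now emit ARM::tTRAP rather than ARM::TRAP.
        call void @llvm.trap()
        unreachable
      }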
* [ARM] Use isFMAFasterThanFMulAndFAdd for scalars as well as MVE vectors (David Green, 2020-01-05; 3 files changed, -5/+5)
  This adds extra scalar handling to isFMAFasterThanFMulAndFAdd, allowing the target independent code to handle more folds in more situations (for example if the fast math flags are present, but the global AllowFPOpFusion option isn't). It also splits apart HasSlowFPVMLx into HasSlowFPVFMx, to allow VFMA and VMLA to be controlled separately if needed.
  Differential Revision: https://reviews.llvm.org/D72139
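  The fold in question turns a separate multiply and add into a fused multiply-add when the fast-math flags allow it; a hypothetical scalar example:

      define float @muladd(float %a, float %b, float %c) {
        ; With the contract flag the DAG may fuse this into a single VFMA,
        ; now that the target reports FMA as not slower for scalars too.
        %m = fmul contract float %a, %b
        %r = fadd contract float %m, %c
        ret float %r
      }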
* [ARM] Fill in FP16 FMA patterns (David Green, 2020-01-05; 1 file changed, -64/+44)
  This adds fp16 variants of all the fma patterns in the ARM backend.
  Differential Revision: https://reviews.llvm.org/D72138
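  A half-precision counterpart to the example above (a sketch; assumes a target with the fullfp16 feature so that half is legal):

      declare half @llvm.fma.f16(half, half, half)

      define half @fma16(half %a, half %b, half %c) "target-features"="+fullfp16" {
        ; With the new patterns this can select a single fp16 fused multiply-add.
        %r = call half @llvm.fma.f16(half %a, half %b, half %c)
        ret half %r
      }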
* [ARM] Add and update FMA tests. NFC (David Green, 2020-01-05; 3 files changed, -31/+480)
* [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::EXTRACT_VECTOR_ELT (REAPPLIED) (Simon Pilgrim, 2020-01-04; 1 file changed, -6/+3)
  This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract. In particular this helps remove some unnecessary scalar->vector->scalar patterns.
  The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen is to be refactored in the near term and this isn't considered a major issue.
  Reapplied after reversion at rL368660 due to PR42982, which was fixed at rGca7fdd41bda0.
  Differential Revision: https://reviews.llvm.org/D65887
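  A hypothetical example of the scalar->vector->scalar shape this removes: the extract only demands lane 0, so the insert of %y into lane 1 has no impact and the round trip can collapse to %x.

      define i32 @roundtrip(i32 %x, i32 %y) {
        %v0 = insertelement <4 x i32> undef, i32 %x, i32 0
        %v1 = insertelement <4 x i32> %v0, i32 %y, i32 1   ; irrelevant to lane 0
        %e  = extractelement <4 x i32> %v1, i32 0
        ret i32 %e
      }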
* [DAGCombine] Initialize the default operation action for SIGN_EXTEND_INREG for vector type as 'expand' instead of 'legal' (QingShan Zhang, 2020-01-03; 1 file changed, -7/+7)
  Until now, we didn't set the default operation action for SIGN_EXTEND_INREG for vector types, which left it at 0, i.e. legal. However, most targets don't have native instructions to support this opcode. It should be set as expand by default, as was done for ANY_EXTEND_VECTOR_INREG.
  Differential Revision: https://reviews.llvm.org/D70000
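  One hypothetical way a vector SIGN_EXTEND_INREG node arises is a truncate that is immediately sign-extended back to the original width; targets without a native instruction now get it expanded (typically into a shift-left/arithmetic-shift-right pair):

      define <4 x i32> @sext_in_reg(<4 x i32> %x) {
        ; Keeps the low 16 bits of each lane, sign-extended back to i32;
        ; in the DAG this becomes SIGN_EXTEND_INREG of <4 x i32> from i16.
        %t = trunc <4 x i32> %x to <4 x i16>
        %s = sext <4 x i16> %t to <4 x i32>
        ret <4 x i32> %s
      }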
* [ARM] Update ifcvt test target triples and opcodes. NFC (David Green, 2020-01-02; 8 files changed, -41/+41)
  Some of the instructions in these tests were technically invalid combinations (using ARM opcodes in Thumb mode, for example). Update the targets and the instructions used to be more correct.
* Migrate function attribute "no-frame-pointer-elim"="false" to ↵Fangrui Song2019-12-2414-20/+20
| | | | "frame-pointer"="none" as cleanups after D56351
* Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" ↵Fangrui Song2019-12-2436-105/+105
| | | | as cleanups after D56351
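  In the test files these two commits are a mechanical attribute rename; a sketch of the before/after spelling (hypothetical function, the mapping is as stated in the two subjects above):

      define void @with_fp() #0 {
        ret void
      }

      ; Old spelling used by the tests before the cleanup:
      ;   attributes #0 = { "no-frame-pointer-elim"="true" }
      ; New spelling after the migration:
      attributes #0 = { "frame-pointer"="all" }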
* [DAGCombine] visitEXTRACT_SUBVECTOR - 'little to big' extract_subvector(bitcast()) support (Sanjay Patel, 2019-12-23; 1 file changed, -2/+2)
  This moves the X86-specific transform from rL364407 into DAGCombiner to generically handle 'little to big' cases (for example: extract_subvector(v2i64 bitcast(v16i8))). This allows us to remove both the x86 implementation and the aarch64 bitcast(extract_subvector(bitcast())) combine.
  Earlier patches that dealt with regressions initially exposed by this patch: rG5e5e99c041e4, rG0b38af89e2c0
  Patch by: @RKSimon (Simon Pilgrim)
  Differential Revision: https://reviews.llvm.org/D63815
* [ARM] [Windows] Use COFF stubs for calls to extern_weak functions (Martin Storsjö, 2019-12-23; 1 file changed, -3/+6)
  As the extern_weak target might be missing, resolving to the absolute address zero, we can't use the normal direct PC-relative branch instructions (as that would result in relocations out of range). Instead check the shouldAssumeDSOLocal method and load the address from a COFF stub.
  This matches what was done for X86 in 6bf108d77a3c.
  Differential Revision: https://reviews.llvm.org/D71720
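  A sketch of the usual extern_weak pattern this affects (hypothetical symbol name): the callee may be absent and resolve to null, so on Windows/ARM its address is now loaded from a COFF stub instead of being reached by a direct branch.

      declare extern_weak void @optional_hook()

      define void @run_hook() {
        %present = icmp ne void ()* @optional_hook, null
        br i1 %present, label %do_call, label %done
      do_call:
        call void @optional_hook()
        br label %done
      done:
        ret void
      }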
* Revert "[ARM] Improve codegen of volatile load/store of i64"Victor Campos2019-12-201-153/+0
| | | | This reverts commit bbcf1c3496ce2bd1ed87e8fb15ad896e279633ce.
* [ARM] Improve codegen of volatile load/store of i64 (Victor Campos, 2019-12-19; 1 file changed, -0/+153)
  Summary:
  Instead of generating two i32 instructions for each load or store of a volatile i64 value (two LDRs or STRs), now emit LDRD/STRD. These improvements cover architectures implementing ARMv5TE or Thumb-2.
  Reviewers: dmgreen, efriedma, john.brawn
  Reviewed By: efriedma
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70072
* Reapply: [DebugInfo] Correctly handle salvaged casts and split fragments at ISel (stozer, 2019-12-18; 1 file changed, -0/+72)
  This reverts commit 1f3dd83cc1f2b8f72b9d59e2b4221b12fb7f9a95, reapplying commit bb1b0bc4e57428ce364d3d6c075ff03cb8973462.
  The original commit failed on some builds, seemingly due to the use of a bracketed constructor with an std::array, i.e. `std::array<> arr({...})`.
* Revert "[DebugInfo] Correctly handle salvaged casts and split fragments at ISel"stozer2019-12-181-72/+0
| | | | | | Reverted due to build failure on windows bots. This reverts commit bb1b0bc4e57428ce364d3d6c075ff03cb8973462.
* [DebugInfo] Correctly handle salvaged casts and split fragments at ISel (stozer, 2019-12-18; 1 file changed, -0/+72)
  Previously, LLVM had no functional way of performing casts inside of a DIExpression(), which made salvaging cast instructions other than Noop casts impossible. This patch enables the salvaging of casts by using the DW_OP_LLVM_convert operator for SExt and Trunc instructions.
  There is another issue which is exposed by this fix, in which fragment DIExpressions (which are preserved more readily by this patch) for values that must be split across registers in ISel trigger an assertion, as the 'split' fragments extend beyond the bounds of the fragment DIExpression, causing an error. This patch also fixes that issue by checking the fragment status of DIExpressions which are to be split, and dropping fragments that are invalid.
* [FPEnv] Remove unnecessary rounding mode argument for constrained intrinsics (Ulrich Weigand, 2019-12-17; 1 file changed, -24/+24)
  The following intrinsics currently carry a rounding mode metadata argument:
    llvm.experimental.constrained.minnum
    llvm.experimental.constrained.maxnum
    llvm.experimental.constrained.ceil
    llvm.experimental.constrained.floor
    llvm.experimental.constrained.round
    llvm.experimental.constrained.trunc
  This is not useful since the semantics of those intrinsics do not in any way depend on the rounding mode. In similar cases, other constrained intrinsics do not have the rounding mode argument. Remove it here as well.
  Reviewed By: craig.topper
  Differential Revision: https://reviews.llvm.org/D71218
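  After this change, calls to these intrinsics carry only the exception-behavior metadata; a sketch of the new form (the old form had an extra !"round...." operand before it):

      declare double @llvm.experimental.constrained.ceil.f64(double, metadata)

      define double @strict_ceil(double %x) strictfp {
        ; No rounding-mode operand any more, only the exception behavior.
        %r = call double @llvm.experimental.constrained.ceil.f64(double %x, metadata !"fpexcept.strict") strictfp
        ret double %r
      }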
* [PGO][PGSO] Enable size optimizations in code gen / target passes for cold code. (Hiroshi Yamauchi, 2019-12-13; 1 file changed, -0/+25)
  Summary: Split off of D67120.
  Reviewers: davidxl
  Subscribers: hiraditya, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D71288
* [ARM] Fix an ICE when retrieving the number of micro-ops for vlldm/vlstm (Momchil Velikov, 2019-12-13; 1 file changed, -0/+72)
  The big switch in `ARMBaseInstrInfo::getNumMicroOps` is missing cases for `VLLDM` and `VLSTM`, which are currently defined with itineraries having a dynamic count of micro-ops.
  Assuming an optimistic case in which these instructions do not actually perform loads or stores, and with the idea that Armv8-m cores are supposed to use the new-style scheduling models, this patch just sets the itinerary for those two instructions to `NoItinerary`.
  Differential Revision: https://reviews.llvm.org/D71266
* Revert "[ARM][MVE] findVCMPToFoldIntoVPS. NFC."Sjoerd Meijer2019-12-131-1/+0
| | | | | | | | This reverts commit 9468e3334ba54fbb1b209aaec662d7375451fa1f. There's a test that doesn't like this change. The RDA analysis gets invalided by changes in the block, which is not taken into account. Revert while I work on a fix for this.
* [ARM][MVE] findVCMPToFoldIntoVPS. NFC. (Sjoerd Meijer, 2019-12-12; 1 file changed, -0/+1)
  This adds ReachingDefAnalysis (RDA) to the VPTBlock pass, so that we can reimplement findVCMPToFoldIntoVPS with just a few calls to RDA.
  Differential Revision: https://reviews.llvm.org/D71330
* [NFC][ARM] Add some test triples (Sam Parker, 2019-12-12; 2 files changed, -181/+773)
  Add thumb and thumb2 to a couple of the test files.
* [DebugInfo] Prevent invalid fragments at ISel from dropping debug info (stozer, 2019-12-12; 1 file changed, -0/+51)
  During SelectionDAG, if a value which is associated with a DBG_VALUE needs to be split across multiple registers, the DBG_VALUE will be split into a set of fragment expressions to recreate the original value. If one or more of these fragments cannot be created, they would previously be silently dropped, causing the old debug value to live past its expiry date. This patch fixes the issue by keeping invalid fragments while setting their value as Undef.
  Differential revision: https://reviews.llvm.org/D70248
* [LegalizeTypes] Bugfixes for big-endian targets when handling BITCASTs (Mikael Holmen, 2019-12-10; 1 file changed, -9/+12)
  Summary:
  This fixes PR44135. The special case when we promote a bitcast from a vector to an int needs special handling when we are on a big-endian target.
  Prior to this fix, for the added vec_to_int testcase we see the following in the SelectionDAG printouts:
    Type-legalized selection DAG: %bb.1 'foo:bb.1'
    SelectionDAG has 9 nodes:
      t0: ch = EntryToken
      t2: v8i16,ch = CopyFromReg t0, Register:v8i16 %0
      t17: v4i32 = bitcast t2
      t23: i32 = extract_vector_elt t17, Constant:i32<3>
      t8: ch,glue = CopyToReg t0, Register:i32 $r0, t23
      t9: ch = ARMISD::RET_FLAG t8, Register:i32 $r0, t8:1
  I think the extract_vector_elt here is wrong and extracts the value from the wrong index. The program should return the 32 bits made up of the elements at index 4 and 5 in the vec6 array, but with
    t23: i32 = extract_vector_elt t17, Constant:i32<3>
  as far as I can tell, we will extract values that originally didn't even exist in the vec6 vector. If we instead extracted the element at index 2, we would get the wanted values.
  With this fix we insert a right shift after the bitcast in DAGTypeLegalizer::PromoteIntRes_BITCAST, which then gives us:
    Type-legalized selection DAG: %bb.1 'vec_to_int:bb.1'
    SelectionDAG has 9 nodes:
      t0: ch = EntryToken
      t2: v8i16,ch = CopyFromReg t0, Register:v8i16 %0
      t23: v4i32 = bitcast t2
      t27: i32 = extract_vector_elt t23, Constant:i32<2>
      t8: ch,glue = CopyToReg t0, Register:i32 $r0, t27
      t9: ch = ARMISD::RET_FLAG t8, Register:i32 $r0, t8:1
  So now we get
    t27: i32 = extract_vector_elt t23, Constant:i32<2>
  which is what we want.
  Similarly, the new int_to_vec testcase exposes a bug where we cast in the other direction. There we instead need to add a left shift before the bitcast on big-endian targets for the bits in the input integer to end up at the expected place in the vector.
  Reviewers: bogner, spatel, craig.topper, t.p.northover, dmgreen, efriedma, SjoerdMeijer, samparker
  Reviewed By: efriedma
  Subscribers: eli.friedman, bjope, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70942
* Add testcases exposing PR44135 (Mikael Holmen, 2019-12-10; 1 file changed, -0/+56)
* [PGO][PGSO] Instrument the code gen / target passes. (Hiroshi Yamauchi, 2019-12-09; 1 file changed, -0/+7)
  Summary:
  Split off of D67120.
  Add the profile-guided size optimization instrumentation / queries in the code gen or target passes. This doesn't enable the size optimizations in those passes yet as they are currently disabled in shouldOptimizeForSize (for non-IR pass queries).
  A second try after the reverted D71072.
  Reviewers: davidxl
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D71149
* Revert "[PGO][PGSO] Instrument the code gen / target passes."Hiroshi Yamauchi2019-12-061-7/+0
| | | | | | This reverts commit 9a0b5e14075a1f42a72eedb66fd4fde7985d37ac. This seems to break buildbots.
* Revert "ARM-Darwin: keep the frame register reserved even if not updated."Alina Sbirlea2019-12-061-15/+0
| | | | | | | | This reverts commit a7d90af1be48234ce583e00fb16e33633d44ae38. This revision came back as the root-cause for crashes in internal ARM-IOS apps. Reproducer in https://bugs.llvm.org/show_bug.cgi?id=44231.