path: root/llvm/lib/Target
* [X86] Fix typo in comment. NFC (Craig Topper, 2017-09-12; 1 file, -1/+1)
  llvm-svn: 312990

* Revert r312898 "[ARM] Use ADDCARRY / SUBCARRY" (Hans Wennborg, 2017-09-11; 2 files, -168/+20)
  It caused PR34564.

  > This is a preparatory step for D34515 and also is being recommitted as its
  > first version caused PR34045.
  >
  > This change:
  > - makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
  > - lowering is done by first converting the boolean value into the carry flag
  >   using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an integer
  >   value using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the
  >   two operations does the actual addition.
  > - for subtraction, given that the second result of ISD::SUBCARRY is actually
  >   a borrow, we need to invert the value of the second operand and result
  >   before and after using ARMISD::SUBE. We need to invert the carry result of
  >   ARMISD::SUBE to preserve the semantics.
  > - given that the generic combiner may lower ISD::ADDCARRY and ISD::SUBCARRY
  >   into ISD::UADDO and ISD::USUBO, we need to update their lowering as well,
  >   otherwise i64 operations would now require branches. This implies updating
  >   the corresponding test for unsigned.
  > - adds a new combine to remove the redundant conversions from/to carry flags
  >   to/from boolean values: (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
  > - fixes PR34045
  >
  > Differential Revision: https://reviews.llvm.org/D35192

  llvm-svn: 312980

* bpf: add " ll" in the LD_IMM64 asmstring (Yonghong Song, 2017-09-11; 2 files, -2/+2)
  This partially reverts the previous fix in commit f5858045aa0b
  ("bpf: proper print imm64 expression in inst printer").

  In that commit, the original suffix "ll" was removed from the LD_IMM64
  asmstring. In the custom print method, the "ll" suffix is printed only if
  the rhs is an immediate. For example, "r2 = 5ll" => "r2 = 5ll", and
  "r3 = varll" => "r3 = var".

  This has an issue for the assembler, though. Since the assembler relies on
  the asmstring to do pattern matching, it cannot distinguish between
  "mov r2, 5" and "ld_imm64 r2, 5" since both asmstrings are "r2 = 5". In
  such cases, the assembler uses a 64-bit load for all "r = <val>" asm insts.

  This patch adds back the " ll" suffix for ld_imm64, with one additional
  space for the "#reg = #global_var" case.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  Acked-by: Alexei Starovoitov <ast@kernel.org>

  llvm-svn: 312978

* AMDGPU: Allow coldcc calls (Matt Arsenault, 2017-09-11; 1 file, -0/+2)
  llvm-svn: 312936

* [mips][microMIPS] add lapc instruction (Petar Jovanovic, 2017-09-11; 2 files, -0/+6)
  Implement the LAPC instruction for mips32r6, mips64r6 and micromips32r6.

  Patch by Milos Stojanovic.

  Differential Revision: https://reviews.llvm.org/D35984

  llvm-svn: 312934

* [AMDGPU] Produce madak and madmk from the two-address pass (Stanislav Mekhanoshin, 2017-09-11; 1 file, -0/+42)
  These two instructions are normally selected, but when the two-address
  pass converts mac into mad, we end up with a mad where we could have had
  one of these.

  Differential Revision: https://reviews.llvm.org/D37389

  llvm-svn: 312928

* [X86] Remove portions of r275950 that are no longer needed with i1 not being a legal type (Craig Topper, 2017-09-11; 2 files, -38/+8)
  Summary:
  r275950 added support for turning (trunc (X >> N) to i1) into BT(X, N).
  But that's no longer necessary now that i1 isn't legal.

  This patch removes the support for that, but preserves some of the
  refactorings done in that commit.

  Reviewers: guyblank, RKSimon, spatel, zvi

  Reviewed By: RKSimon

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D37673

  llvm-svn: 312925

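  As an illustrative aside (a hypothetical function, not from the commit),
  this is the shape r275950 used to turn into BT(X, N), which no longer
  needs the dedicated combine now that i1 is promoted rather than legal:

  ```
  define i1 @bit_test(i32 %x, i32 %n) {
    %sh = lshr i32 %x, %n          ; X >> N
    %bit = trunc i32 %sh to i1     ; (trunc (X >> N) to i1)
    ret i1 %bit
  }
  ```
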
* [X86][SSE] Add support for X86ISD::PACKSS to ComputeNumSignBitsForTargetNode (Simon Pilgrim, 2017-09-11; 1 file, -0/+12)
  Helps improve combineLogicBlendIntoPBLENDV support by allowing us to peek
  through PACKSS truncations of vector comparison results.

  Differential Revision: https://reviews.llvm.org/D37680

  llvm-svn: 312916

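  For illustration (my example, not the commit's), truncating a
  sign-extended vector comparison is the kind of pattern that can lower
  through X86ISD::PACKSS; tracking the sign bits of the packed result lets
  later combines see through the truncation:

  ```
  define <8 x i16> @cmp_narrow(<8 x i32> %a, <8 x i32> %b) {
    %cmp = icmp sgt <8 x i32> %a, %b
    %ext = sext <8 x i1> %cmp to <8 x i32>    ; all-ones / all-zeros lanes
    %nar = trunc <8 x i32> %ext to <8 x i16>  ; may lower to PACKSSDW
    ret <8 x i16> %nar
  }
  ```
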
* [AMDGPU] exp should not be in WQM mode (Tim Renouf, 2017-09-11; 1 file, -1/+1)
  An mrt exp with vm=1 must be in exact (non-WQM) mode, as it also exports
  the exec mask as the valid mask to determine which pixels to render.

  This commit marks any exp as needing to be in exact mode.

  Actually, if there are multiple mrt exps, only one needs to have vm=1, and
  only that one needs to be in exact mode. But that is an optimization for
  another day.

  Differential Revision: https://reviews.llvm.org/D36305

  llvm-svn: 312915

* [ARM] Enable the use of SVC anywhere in an IT block (Andre Vieira, 2017-09-11; 1 file, -3/+4)
  Differential Revision: https://reviews.llvm.org/D37374

  llvm-svn: 312908

* [AVR] Enable the '__do_copy_data' function (Dylan McKay, 2017-09-11; 2 files, -0/+22)
  Also enables '__do_clear_bss'.

  These functions are automatically called by the CRT if they are declared.
  We need them to be called, otherwise RAM will start completely
  uninitialised, even though we need to copy RAM variables from progmem to
  RAM.

  llvm-svn: 312905

* [GlobalISel][X86] G_ANYEXT support. (Igor Breger, 2017-09-11; 2 files, -7/+62)
  Summary: G_ANYEXT support

  Reviewers: zvi, delena

  Reviewed By: delena

  Subscribers: rovka, kristof.beyls, llvm-commits

  Differential Revision: https://reviews.llvm.org/D37675

  llvm-svn: 312903

* AMDGPU: trivial comment change (Tim Renouf, 2017-09-11; 1 file, -1/+1)
  ... to check commit access for a new committer.

  llvm-svn: 312900

* [ARM] Use ADDCARRY / SUBCARRY (Roger Ferrer Ibanez, 2017-09-11; 2 files, -20/+168)
  This is a preparatory step for D34515 and also is being recommitted as its
  first version caused PR34045.

  This change:
  - makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
  - lowering is done by first converting the boolean value into the carry
    flag using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an
    integer value using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE
    between the two operations does the actual addition.
  - for subtraction, given that the second result of ISD::SUBCARRY is
    actually a borrow, we need to invert the value of the second operand and
    result before and after using ARMISD::SUBE. We need to invert the carry
    result of ARMISD::SUBE to preserve the semantics.
  - given that the generic combiner may lower ISD::ADDCARRY and
    ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their
    lowering as well, otherwise i64 operations would now require branches.
    This implies updating the corresponding test for unsigned.
  - adds a new combine to remove the redundant conversions from/to carry
    flags to/from boolean values: (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
  - fixes PR34045

  Differential Revision: https://reviews.llvm.org/D35192

  llvm-svn: 312898

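  For illustration (not part of the commit), the simplest place
  ISD::ADDCARRY shows up on a 32-bit ARM target is a plain i64 addition,
  which type legalization splits into a 32-bit add plus an add-with-carry:

  ```
  define i64 @add64(i64 %a, i64 %b) {
    %sum = add i64 %a, %b   ; expanded to an adds/adc pair on 32-bit ARM
    ret i64 %sum
  }
  ```
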
* [X86][SSE] Tidyup + clang-format combineX86ShuffleChain call. NFCI. (Simon Pilgrim, 2017-09-10; 1 file, -3/+2)
  llvm-svn: 312887

* [X86][SSE] Move combineTo call out of combineX86ShufflesConstants. NFCI. (Simon Pilgrim, 2017-09-10; 1 file, -11/+13)
  Move towards making it possible to use the shuffle combines for cases
  where we don't want to call DCI.CombineTo() with the result.

  llvm-svn: 312886

* [X86][SSE] Move combineTo call out of combineX86ShuffleChain. NFCI. (Simon Pilgrim, 2017-09-10; 1 file, -74/+46)
  First step towards making it possible to use the shuffle combines for
  cases where we don't want to call DCI.CombineTo() with the result.

  llvm-svn: 312884

* [X86][X86AsmParser] adding const on InlineAsmIdentifierInfo in CreateMemForInlineAsm. NFC. (Coby Tayree, 2017-09-10; 1 file, -2/+2)
  llvm-svn: 312881

* Revert "adding autoUpgrade support to broadcast[f|i]32x2 intrinsics"Uriel Korach2017-09-101-0/+10
| | | | | | This reverts commit r312879 - An accidental partial commit. llvm-svn: 312880
* adding autoUpgrade support to broadcast[f|i]32x2 intrinsics (Uriel Korach, 2017-09-10; 1 file, -10/+0)
  llvm-svn: 312879

* [X86] Don't disable slow INC/DEC if optimizing for size (Craig Topper, 2017-09-09; 4 files, -11/+18)
  Summary:
  Just because INC/DEC is a little slow on some processors doesn't mean we
  shouldn't prefer it when optimizing for size. This appears to match gcc
  behavior.

  Reviewers: chandlerc, zvi, RKSimon, spatel

  Reviewed By: RKSimon

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D37177

  llvm-svn: 312866

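  A hypothetical example of the effect: on a subtarget with the slow
  INC/DEC feature, a size-optimized function should still get the shorter
  encoding:

  ```
  define i32 @inc(i32 %x) minsize {
    %r = add i32 %x, 1   ; prefer incl over addl $1: one byte shorter
    ret i32 %r
  }
  ```
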
* [DivRemPairs] add a pass to optimize div/rem pairs (PR31028) (Sanjay Patel, 2017-09-09; 2 files, -0/+6)
  This is intended to be a superset of the functionality from D31037
  (EarlyCSE) but implemented as an independent pass, so there's no
  stretching of scope and feature creep for an existing pass. I also
  proposed a weaker version of this for SimplifyCFG in D30910. And I
  initially had almost this same functionality as an addition to CGP in the
  motivating example of PR31028:
  https://bugs.llvm.org/show_bug.cgi?id=31028

  The advantage of positioning this ahead of SimplifyCFG in the pass
  pipeline is that it can allow more flattening. But it needs to be after
  passes (InstCombine) that could sink a div/rem and undo the hoisting that
  is done here.

  Decomposing remainder may allow removing some code from the backend (PPC
  and possibly others).

  Differential Revision: https://reviews.llvm.org/D37121

  llvm-svn: 312862

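  For illustration (my sketch, not the commit's test), the pass looks for a
  div and rem of the same operands; where a combined divmod isn't
  available, the remainder can be decomposed so only one division remains:

  ```
  define i32 @div_rem(i32 %a, i32 %b) {
    %div = sdiv i32 %a, %b
    %rem = srem i32 %a, %b          ; candidate: %rem == %a - (%div * %b)
    %sum = add i32 %div, %rem
    ret i32 %sum
  }
  ```
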
* [X86] Call removeDeadNode when we're done doing custom isel for mul, div and test (Craig Topper, 2017-09-09; 1 file, -0/+6)
  Summary:
  Once we've done our custom isel for these nodes, I think we should be
  calling removeDeadNode to prune them out of the DAG. Table-driven isel
  ultimately either calls morphNodeTo, which modifies a node and doesn't
  leave dead nodes, or it emits new nodes and then calls removeDeadNode as
  part of Opc_CompleteMatch.

  If you run a simple multiply test case like this through llc with -debug,
  you'll see a umul_lohi node get printed as part of the dump for
  "Instruction Selection ends":

  ```
  define i64 @foo(i64 %a, i64 %b) local_unnamed_addr #0 {
  entry:
    %conv = zext i64 %a to i128
    %conv1 = zext i64 %b to i128
    %mul = mul nuw nsw i128 %conv1, %conv
    %shr = lshr i128 %mul, 64
    %conv2 = trunc i128 %shr to i64
    ret i64 %conv2
  }
  ```

  Reviewers: RKSimon, spatel, zvi, guyblank, niravd

  Reviewed By: niravd

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D37547

  llvm-svn: 312857

* [X86] Use ReplaceNode instead of ReplaceUses when converting X86ISD::SHRUNKBLEND to ISD::VSELECT during isel (Craig Topper, 2017-09-09; 1 file, -1/+1)
  This ensures that the SHRUNKBLEND node gets erased immediately.

  llvm-svn: 312856

* PPC: Don't select lxv/stxv for insufficiently aligned stack slots. (Kyle Butt, 2017-09-09; 1 file, -1/+11)
  The lxv/stxv instructions require an offset that is 0 % 16. Previously we
  were selecting lxv/stxv for loads and stores to the stack where the
  offset from the slot was a multiple of 16, but the stack slot was not
  16-byte or more aligned. When the frame gets lowered, these transform to
  r(1|31) + slot + offset. If the slot is not aligned, slot + offset may
  not be 0 % 16. Now we require 16-byte or greater alignment to select
  lxv/stxv for stack slots.

  Includes a testcase that shows both sufficiently and insufficiently
  aligned stack slots.

  llvm-svn: 312843

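  A hypothetical sketch of the problematic shape: the offset from the slot
  base is a multiple of 16, but the slot itself is only 8-byte aligned, so
  the final r(1|31) + slot + offset address need not be 0 % 16:

  ```
  define <4 x i32> @slot_load() {
    %buf = alloca [4 x <4 x i32>], align 8       ; slot only 8-byte aligned
    %p = getelementptr inbounds [4 x <4 x i32>], [4 x <4 x i32>]* %buf, i64 0, i64 1
    %v = load <4 x i32>, <4 x i32>* %p, align 8  ; offset 16 from the slot base
    ret <4 x i32> %v
  }
  ```
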
* [AMDGPU] Remove unused function. NFCI. (Davide Italiano, 2017-09-08; 1 file, -9/+0)
  llvm-svn: 312836

* bpf: proper print imm64 expression in inst printer (Yonghong Song, 2017-09-08; 2 files, -2/+4)
  Fixed an issue in printImm64Operand: if the value is an expression, print
  out the expression properly.

  Currently, it will print
    r1 = <MCOperand Expr:(tx_port)>ll
  With the patch, the printout will be
    r1 = tx_port

  Suggested-by: Jiong Wang <jiong.wang@netronome.com>
  Signed-off-by: Yonghong Song <yhs@fb.com>
  Acked-by: Alexei Starovoitov <ast@kernel.org>

  llvm-svn: 312833

* AMDGPU: Start using !con operator (Matt Arsenault, 2017-09-08; 1 file, -14/+12)
  We have a lot of operand definition work essentially producing every
  valid permutation of operands to work around building operand lists based
  on the instruction features. Apparently tablegen already has a mostly
  undocumented operator to concat dags, which simplifies this.

  Convert one simple place to use this. The BUF instruction definitions
  have much more complicated logic that can be totally rewritten now.

  llvm-svn: 312822

* AMDGPU: Recompute scc liveness (Matt Arsenault, 2017-09-08; 1 file, -1/+7)
  The various scalar bit operations set SCC, so if one is erased or moved,
  SCC liveness needs to be recomputed. Not sure why the existing tests
  don't fail on this.

  llvm-svn: 312819

* [x86] Fix GCC pedantic warnings about default arguments for lambdas. (Chandler Carruth, 2017-09-08; 1 file, -6/+6)
  llvm-svn: 312809

* [SLP] Support for horizontal min/max reduction. (Alexey Bataev, 2017-09-08; 2 files, -0/+149)
  The SLP vectorizer supports horizontal reductions for Add/FAdd binary
  operations. This patch adds support for horizontal min/max reductions.
  The function getReductionCost() is split into getArithmeticReductionCost()
  for binary operation reductions and getMinMaxReductionCost() for min/max
  reductions. Fixes PR26956.

  Differential revision: https://reviews.llvm.org/D27846

  llvm-svn: 312791

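  For illustration (my example, not the commit's), the kind of scalar
  icmp/select chain that can now be recognized as a horizontal smax
  reduction:

  ```
  define i32 @max4(i32* %p) {
    %p1 = getelementptr inbounds i32, i32* %p, i64 1
    %p2 = getelementptr inbounds i32, i32* %p, i64 2
    %p3 = getelementptr inbounds i32, i32* %p, i64 3
    %a = load i32, i32* %p
    %b = load i32, i32* %p1
    %c = load i32, i32* %p2
    %d = load i32, i32* %p3
    %c1 = icmp sgt i32 %a, %b
    %m1 = select i1 %c1, i32 %a, i32 %b    ; max(a, b)
    %c2 = icmp sgt i32 %m1, %c
    %m2 = select i1 %c2, i32 %m1, i32 %c   ; max(max(a, b), c)
    %c3 = icmp sgt i32 %m2, %d
    %m3 = select i1 %c3, i32 %m2, i32 %d   ; whole chain = smax reduction
    ret i32 %m3
  }
  ```
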
* [XRay][CodeGen][PowerPC] Fix tail exit codegen for XRay in PPC (Dean Michael Berris, 2017-09-08; 1 file, -0/+2)
  Summary:
  This fixes code-gen for XRay in PPC. The regression wasn't caught by
  codegen tests, which we add in this change. What happened was the
  following:

  - For tail exits, we used to unconditionally prepend the returns/exits
    with a pseudo-instruction that gets lowered to the instrumentation sled
    (and leave the actual return/exit instruction as-is).
  - Changes to the XRay instrumentation pass caused the tail exits to
    suddenly also emit the tail exit pseudo-instruction, since the check
    for whether a return instruction was also a call instruction meant it
    was a tail exit instruction.
  - None of the tests caught the regression, either due to non-existent
    tests or the tests being disabled/removed for continuous breakage.

  This change re-introduces some of the basic tests and verifies that we're
  back to a state that allows the back-end to generate appropriate XRay
  instrumented binaries for PPC in the presence of tail exits.

  Reviewers: echristo, timshen

  Subscribers: nemanjai, kbarton, llvm-commits

  Differential Revision: https://reviews.llvm.org/D37570

  llvm-svn: 312772

* [x86] Flesh out the custom ISel for RMW arithmetic ops with used flags to cover the bitwise operators (Chandler Carruth, 2017-09-08; 1 file, -2/+32)
  Nothing really exciting here, this just stamps out the rest of the core
  operations that can RMW memory and set flags.

  Still not implemented here: ADC, SBB. Those will require more interesting
  logic to channel the flags *in*, and I'm not currently planning to try to
  tackle that. It might be interesting for someone who wants to improve our
  code generation for bignum implementations.

  Differential Revision: https://reviews.llvm.org/D37141

  llvm-svn: 312768

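  A sketch (mine, with a hypothetical callee) of the shape this targets: a
  bitwise op that both writes back to memory and feeds a flags test, which
  can fold into a single or-to-memory whose flags the branch reuses:

  ```
  declare void @was_zero()

  define void @rmw_or(i32* %p, i32 %x) {
  entry:
    %v = load i32, i32* %p
    %o = or i32 %v, %x
    store i32 %o, i32* %p               ; RMW: orl %esi, (%rdi)
    %z = icmp eq i32 %o, 0
    br i1 %z, label %zero, label %done  ; reuses ZF set by the orl

  zero:
    call void @was_zero()
    br label %done

  done:
    ret void
  }
  ```
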
* [x86] Extend the manual ISel of `add` and `sub` with both RMW memory operands and used flags to support matching immediate operands (Chandler Carruth, 2017-09-07; 1 file, -15/+142)
  This is a bit trickier than register operands, and we still want to fall
  back on register operands even for things that appear to be "immediates"
  when they won't actually select into the operation's immediate operand.

  This also requires us to handle things like selecting `sub` vs. `add` to
  minimize the number of bits needed to represent the immediate, and
  picking the shortest immediate encoding. In order to do that, we in turn
  need to scan to make sure that CF isn't used, as it will get inverted.

  The end result seems very nice though, and we're now generating optimal
  instruction sequences for these patterns IMO.

  A follow-up patch will further expand this to other operations with RMW
  memory operands. But handling `add` and `sub` are useful starting points
  to flesh out the machinery and make sure interesting and complex cases
  can be handled.

  Thanks to Craig Topper who provided a few fixes and improvements to this
  patch in addition to the review!

  Differential Revision: https://reviews.llvm.org/D37139

  llvm-svn: 312764

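  To illustrate the immediate-shrinking point with a made-up case: adding
  128 doesn't fit the sign-extended imm8 encoding, but subtracting -128
  does, so the shorter form can be picked as long as nothing reads the
  (inverted) CF:

  ```
  define void @rmw_add128(i32* %p) {
    %v = load i32, i32* %p
    %a = add i32 %v, 128   ; may select as subl $-128, (%rdi): imm8 encoding
    store i32 %a, i32* %p
    ret void
  }
  ```
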
* Sink some IntrinsicInst.h and Intrinsics.h out of llvm/include (Reid Kleckner, 2017-09-07; 2 files, -0/+2)
  Many of these uses can get by with forward declarations. Hopefully this
  speeds up compilation after adding a single intrinsic.

  llvm-svn: 312759

* [CUDA] Added rudimentary support for CUDA-9 and sm_70. (Artem Belevich, 2017-09-07; 1 file, -0/+5)
  For now CUDA-9 is not included in the list of CUDA versions clang
  searches for, so the path to CUDA-9 must be explicitly passed via
  --cuda-path=.

  On the LLVM side, NVPTX added the sm_70 GPU type, which bumps the
  required PTX version to 6.0 but otherwise is equivalent to sm_62 at the
  moment.

  Differential Revision: https://reviews.llvm.org/D37576

  llvm-svn: 312734

* AMDGPU: Start selecting v_mad_mix_f32 (Matt Arsenault, 2017-09-07; 4 files, -5/+105)
  llvm-svn: 312732

* AMDGPU: Handle non-temporal loads and stores (Konstantin Zhuravlyov, 2017-09-07; 1 file, -23/+59)
  Differential Revision: https://reviews.llvm.org/D36862

  llvm-svn: 312729

* AMDGPU: Handle more than one memory operand in SIMemoryLegalizer (Konstantin Zhuravlyov, 2017-09-07; 2 files, -58/+145)
  Differential Revision: https://reviews.llvm.org/D37397

  llvm-svn: 312725

* [ARM] Remove redundant vcvt patterns. (Benjamin Kramer, 2017-09-07; 1 file, -14/+0)
  These don't add any value as they're just compositions of existing
  patterns. However, they can confuse the cost logic in ISel, leading to
  duplicated vcvt instructions like in PR33199.

  llvm-svn: 312724

* [X86][LLVM] Expanding support for lowerInterleavedLoad() in X86InterleavedAccess (VF{8|16|32} stride 3) (Michael Zuckerman, 2017-09-07; 1 file, -20/+193)
  This patch expands the support of lowerInterleavedLoad to {8|16|32}x8i
  stride 3. LLVM creates suboptimal shuffle code-gen for AVX2. Overall,
  this patch is a specific fix for the pattern (Stride=3, VF={8|16|32}),
  and we plan to include the store (deinterleaved) side as well.

  The patch's goal is to optimize the following sequence:

  a0 b0 c0 a1 b1 c1 a2 b2 c2 a3 b3 c3 a4 b4 c4 a5 b5 c5 a6 b6 c6 a7 b7 c7

  into

  a0 a1 a2 a3 a4 a5 a6 a7 b0 b1 b2 b3 b4 b5 b6 b7 c0 c1 c2 c3 c4 c5 c6 c7

  Reviewers: zvi, igor, guyblank, dorit, Ayal

  llvm-svn: 312722

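  For illustration (mine, VF=8 on i8 elements), the deinterleave is one
  wide load plus three stride-3 shuffles of the kind lowerInterleavedLoad()
  now handles:

  ```
  define void @deinterleave_s3(<24 x i8>* %in, <8 x i8>* %pa, <8 x i8>* %pb, <8 x i8>* %pc) {
    %wide = load <24 x i8>, <24 x i8>* %in, align 1
    %a = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 0, i32 3, i32 6, i32 9, i32 12, i32 15, i32 18, i32 21>
    %b = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 1, i32 4, i32 7, i32 10, i32 13, i32 16, i32 19, i32 22>
    %c = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 2, i32 5, i32 8, i32 11, i32 14, i32 17, i32 20, i32 23>
    store <8 x i8> %a, <8 x i8>* %pa
    store <8 x i8> %b, <8 x i8>* %pb
    store <8 x i8> %c, <8 x i8>* %pc
    ret void
  }
  ```
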
* [mips] Use RegisterMCAsmBackend to register all MIPS asm backends. NFC (Simon Atanasyan, 2017-09-07; 5 files, -81/+28)
  This change converts the `MipsAsmBackend` constructor to the "standard"
  form, which makes it possible to use `RegisterMCAsmBackend` for the
  backend registrations. Now we pass a `Triple` instance to the
  `MipsAsmBackend` ctor and deduce all required options like endianness and
  bitness from the triple. We still need to implement explicit ABI checking
  to provide correct options to the backends.

  Differential revision: https://reviews.llvm.org/D37519

  llvm-svn: 312720

* [Sparc][NFC] Clean up SelectCC lowering (Alex Bradbury, 2017-09-07; 1 file, -44/+40)
  The ARM, BPF, MSP430, Sparc and Mips backends all use a similar code
  sequence for lowering SelectCC. As pointed out by @reames in D29937, this
  code isn't particularly clear, and in most of these backends doesn't
  actually match the comments. This patch makes the code sequence clearer
  for the Sparc backend through better variable naming and more accurate
  comments (e.g. we are inserting triangle control flow, _not_ diamond).
  There is no functional change.

  Differential Revision: https://reviews.llvm.org/D37194

  llvm-svn: 312713

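  For context, a minimal select (my example) that gets this lowering: the
  generated code is a triangle, i.e. copy %f and conditionally branch over
  a single "copy %t", rather than a diamond with two branch targets:

  ```
  define i32 @selcc(i32 %a, i32 %b, i32 %t, i32 %f) {
    %cmp = icmp slt i32 %a, %b
    %res = select i1 %cmp, i32 %t, i32 %f
    ret i32 %res
  }
  ```
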
* X86: Improve AVX512 fptoui lowering (Zvi Rackover, 2017-09-07; 3 files, -0/+11)
  Summary:
  Add patterns for
    fptoui <16 x float> to <16 x i8>
    fptoui <16 x float> to <16 x i16>

  Reviewers: igorb, delena, craig.topper

  Reviewed By: craig.topper

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D37505

  llvm-svn: 312704

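  One of the two shapes the new patterns cover, written as plain IR
  (illustrative):

  ```
  define <16 x i8> @fptoui_v16f32(<16 x float> %x) {
    %c = fptoui <16 x float> %x to <16 x i8>   ; now matched directly on AVX-512
    ret <16 x i8> %c
  }
  ```
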
* [X86] Force shuffle lowering to only create X86ISD::VPERM2X128 with 64-bit element types so we can remove some patterns from isel (Craig Topper, 2017-09-07; 2 files, -22/+5)
  Intrinsic handling is still creating these nodes with 32-bit elements as
  well, but at least this gets rid of 8 and 16. Ideally, someday we'll
  convert the intrinsics to generic vector shuffles and remove the
  intrinsics.

  llvm-svn: 312702

* AMDGPU: Don't legalize i16 extloads to i32 with legal i16 (Matt Arsenault, 2017-09-07; 3 files, -1/+8)
  Keeping non-i16 extloads makes it easier to match some new gfx9 load
  instructions.

  llvm-svn: 312699

* [X86] Remove patterns for selecting a v8f32 X86ISD::MOVSS or v4f64 X86ISD::MOVSD (Craig Topper, 2017-09-07; 2 files, -48/+0)
  I don't think we ever generate these. If we did, I would expect we would
  also be able to generate v16f32 and v8f64, but we don't have those
  patterns.

  llvm-svn: 312694

* ARM: track globals promoted to coalesced const pool entries (Saleem Abdulrasool, 2017-09-07; 3 files, -13/+27)
  Globals that are promoted to an ARM constant pool may alias with another
  existing constant pool entry. We need to keep a reference to all globals
  that were promoted to each constant pool value so that we can emit a
  distinct label for each promoted global. These labels are necessary so
  that debug info can refer to the promoted global without an undefined
  reference during linking.

  Patch by Stephen Crane!

  llvm-svn: 312692

* [AMDGPU] Use v_pk_max_f16 for fcanonicalize (Stanislav Mekhanoshin, 2017-09-06; 1 file, -5/+10)
  Differential Revision: https://reviews.llvm.org/D37325

  llvm-svn: 312676

* Insert IMPLICIT_DEFS for undef uses in tail merging (Matthias Braun, 2017-09-06; 2 files, -24/+20)
  Tail merging can convert an undef use into a normal one when creating a
  common tail. Doing so can make the register live out from a block which
  previously contained the undef use. To keep the liveness up-to-date,
  insert IMPLICIT_DEFs in such blocks when necessary.

  To enable this patch, the computeLiveIns() function which used to compute
  live-ins for a block and set them immediately is split into new
  functions:
  - computeLiveIns() just computes the live-ins in a LivePhysRegs set.
  - addLiveIns() applies the live-ins to a block live-in list.
  - computeAndAddLiveIns() is a convenience function combining the other
    two functions and behaving like computeLiveIns() before this patch.

  Based on a patch by Krzysztof Parzyszek <kparzysz@codeaurora.org>

  Differential Revision: https://reviews.llvm.org/D37034

  llvm-svn: 312668
