path: root/llvm/lib
...
* [X86] Change vXi8 MULHU lowering to unpack high and low half of lanes instead of extracting and concatenating low and high half registers (Craig Topper, 2018-11-30; 1 file, -54/+47)
  This reduces the number of shuffle operations that need to be done. The splitting strategy requires the shuffle unit for both the extraction and the extension; with the unpack strategy, the unpacks accomplish splitting and extending in one operation.
  llvm-svn: 348019
* [X86] Prefer lowerVectorShuffleAsBitMask over using an avx512 masked operation when avx512bw/avx512vl is enabled (Craig Topper, 2018-11-30; 1 file, -5/+5)
  This does require a constant pool load instead of loading an immediate into a GPR, moving it to a k register, and masking. But it is fewer instructions and more consistent with previous ISAs. It probably also opens up more combine opportunities, as one of the test cases demonstrates.
  llvm-svn: 348018
* [SelectionDAG] fold FP binops with 2 undef operands to undef (Sanjay Patel, 2018-11-30; 1 file, -2/+4)
  llvm-svn: 348016
* [AMDGPU] Disable SReg Global LD/ST, perf regression (Ron Lieberman, 2018-11-30; 1 file, -0/+7)
  Differential Revision: https://reviews.llvm.org/D55093
  llvm-svn: 348014
* Revert "[BTF] Add BTF DebugInfo"Yonghong Song2018-11-308-1117/+2
| | | | | | This reverts commit 9c6b970db8bc63b28ce58a129bb1580a6a3c6caf. llvm-svn: 348004
* [BTF] Add BTF DebugInfo (Yonghong Song, 2018-11-30; 8 files, -2/+1117)
  This patch adds BPF Debug Format (BTF) as a standalone LLVM debuginfo. The BTF-related sections are directly generated from IR. The BTF debuginfo is generated only when the compilation target is BPF.

  What is BTF?
  ============
  First, BPF is a Linux kernel virtual machine widely used for tracing, networking and security.
    https://www.kernel.org/doc/Documentation/networking/filter.txt
    https://cilium.readthedocs.io/en/v1.2/bpf/
  BTF is the debug info format for BPF, introduced in the linux patch
    https://github.com/torvalds/linux/commit/69b693f0aefa0ed521e8bd02260523b5ae446ad7#diff-06fb1c8825f653d7e539058b72c83332
  in the patch set discussed in the lwn article https://lwn.net/Articles/752047/.
  The BTF format is specified in the above github commit. In summary, its layout looks like:
    struct btf_header
    type subsection (a list of types)
    string subsection (a list of strings)
  With such information, the kernel and user space are able to pretty-print a particular bpf map key/value. One possible example:
    Without BTF: key: [ 0x01, 0x01, 0x00, 0x00 ]
    With BTF:    key: struct t { a : 1; b : 1; c : 0 }
  where the struct is defined as
    struct t { char a; char b; short c; };

  How is BTF generated?
  =====================
  Currently, BTF is generated through pahole
    https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=68645f7facc2eb69d0aeb2dd7d2f0cac0feb4d69
  and is available in pahole v1.12
    https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=4a21c5c8db0fcd2a279d067ecfb731596de822d4
  Basically, the bpf program needs to be compiled with -g so that dwarf sections are generated. pahole is enhanced such that a .BTF section can be generated from the dwarf. This .BTF section matches the format expected by the kernel, so a bpf loader can take the .BTF section as-is and load it into the kernel.
    https://github.com/torvalds/linux/commit/8a138aed4a807ceb143882fb23a423d524dcdb35
  The .BTF section layout is also specified in this patch, in the file include/llvm/BinaryFormat/BTF.h.

  What use cases does this patch address?
  =======================================
  Currently, only the bpf instruction stream is required to be passed to the kernel. The kernel verifies it, jits it if configured to do so, attaches it to a particular kernel attachment point, and later executes it when a particular event happens. This patch expands BTF to support two more use cases:
  (1) BPF supports subroutine calls. During performance analysis, it would be good to differentiate which call is hot instead of just providing a virtual address. This requires passing a unique identifier for each subroutine to the kernel; the subroutine name is a natural choice.
  (2) If a particular jitted instruction is hot, we want the user to know which source line this jitted instruction belongs to. This requires making the source information available to various profiling tools.
  Note that in a single ELF file:
  . there may be multiple loadable bpf programs;
  . for a particular to-be-loaded bpf instruction stream, its instructions may come from multiple PROGBITS sections; the bpf loader needs to merge them into a single consecutive insn stream before loading to the kernel.
  For example, with
    section .text:   subroutine funcFoo
    section _progA:  calling funcFoo
    section _progB:  calling funcFoo
  the bpf loader constructs two loadable bpf instruction streams and loads them into the kernel:
    . _progA funcFoo
    . _progB funcFoo
  So per-ELF-section function offsets and instruction offsets need to be adjusted before passing to the kernel, and the kernel essentially expects only one code section regardless of how many are in the ELF file.

  What do we propose, and why?
  ============================
  To support the above two use cases, we propose to add an additional section, .BTF.ext, to the ELF file which is the input of the bpf loader. A separate section is preferred since the loader may need to manipulate it before loading part of its data to the kernel. The .BTF.ext section has a header similar to the .BTF section and contains two subsections, func_info and line_info:
  . func_info maps a func insn byte offset to a func type in the .BTF type subsection;
  . line_info maps an insn byte offset to a line info;
  . both func_info and line_info subsections are organized by ELF PROGBITS AX sections.
  pahole is not a good place to implement .BTF.ext, as pahole is mostly for structure hole information; more importantly, we want to pass the actual source code to the kernel:
  . a bpf program is typically small, so the storage overhead should be small;
  . in bpf land, it is entirely possible that an application loads the bpf program into the kernel and then quits, so holding debug info in the user-space application is not practical, as you may not even know who loaded the bpf program;
  . having source code kept directly by the kernel eases deployment, since the original source code does not need to ship on every host, and the kernel-devel package does not need to be deployed even if kernel headers are used.
  LLVM is a good place to implement this:
  . the only reliable time to get the source code is during compilation, which results in both more accurate information and easier deployment, as stated above;
  . another consideration is JIT: projects like bcc (https://github.com/iovisor/bcc) use MCJIT to compile a C program into bpf insns and load them into the kernel, and the llvm-generated BTF sections will be readily available for such cases as well.

  Design and implementation of emitting .BTF/.BTF.ext sections
  =============================================================
  The BTF debuginfo format is defined. Both .BTF and .BTF.ext sections are generated directly from IR when both "-target bpf" and "-g" are specified. Note that dwarf sections are still generated, as dwarf is used by user-space tools like llvm-objdump for the BPF target.
  This patch also contains tests to verify the generated .BTF and .BTF.ext sections for all supported types and for the func_info and line_info subsections. The patch is also tested against linux kernel bpf sample tests and selftests.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  Differential Revision: https://reviews.llvm.org/D53736
  llvm-svn: 347999
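  For orientation, a minimal C-style sketch of the btf_header mentioned above; field names follow the kernel's include/uapi/linux/btf.h of this era, but treat exact widths and comments as an assumption:
    // Sketch of the .BTF header; the type and string subsections follow it.
    #include <cstdint>
    struct btf_header {
      uint16_t magic;    // 0xeB9F
      uint8_t  version;
      uint8_t  flags;
      uint32_t hdr_len;  // size of this header
      // All offsets below are in bytes, relative to the end of this header.
      uint32_t type_off; // start of the type subsection
      uint32_t type_len;
      uint32_t str_off;  // start of the string subsection
      uint32_t str_len;
    };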
* [CodeGen] Prefer static frame index for STATEPOINT liveness args (Than McIntosh, 2018-11-30; 1 file, -1/+10)
  Summary: If a given liveness arg of STATEPOINT is at a fixed frame index (e.g. a function argument passed on the stack), prefer to use this fixed location even if the address is also in a register. If we use the register, it will generate a spill, which is not necessary since the fixed frame index can be directly recorded in the stack map.
  Patch by Cherry Zhang <cherryyz@google.com>.
  Reviewers: thanm, niravd, reames
  Reviewed By: reames
  Subscribers: cherryyz, reames, anna, arphaman, llvm-commits
  Differential Revision: https://reviews.llvm.org/D53889
  llvm-svn: 347998
* [SLP] PR39774: Update references of the replaced external instructions (Alexey Bataev, 2018-11-30; 1 file, -0/+2)
  Summary: An additional fix for PR39774. Need to update the references to the ReductionRoot instruction when it is replaced during the vectorization phase, to avoid a compiler crash on reduction vectorization.
  Reviewers: RKSimon, spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D55017
  llvm-svn: 347997
* [AMDGPU] Combine DPP mov with use instructions (VOP1/2/3) (Valery Pykhtin, 2018-11-30; 12 files, -50/+711)
  Introduces DPP pseudo instructions and the pass that combines a DPP mov with subsequent uses.
  Differential revision: https://reviews.llvm.org/D53762
  llvm-svn: 347993
* TableGen/ISel: Allow PatFrag predicate code to access captured operands (Nicolai Haehnle, 2018-11-30; 1 file, -0/+12)
  Summary: This simplifies writing predicates for pattern fragments that are automatically re-associated or commuted. For example, a followup patch adds patterns for fragments of the form (add (shl $x, $y), $z) to the AMDGPU backend. Such patterns are automatically commuted to (add $z, (shl $x, $y)), which makes it basically impossible to refer to $x, $y, and $z generically in the PredicateCode. With this change, the PredicateCode can refer to $x, $y, and $z simply as `Operands[i]`. A rough illustration of what that enables is sketched below.
  Testing confirmed that there are no changes to any of the generated files when building all (non-experimental) targets.
  Change-Id: I61c00ace7eed42c1d4edc4c5351174b56b77a79c
  Reviewers: arsenm, rampitec, RKSimon, craig.topper, hfinkel, uweigand
  Subscribers: wdng, tpr, llvm-commits
  Differential Revision: https://reviews.llvm.org/D51994
  llvm-svn: 347992
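  A rough standalone illustration of what a PredicateCode body can now do; this is hypothetical, as the real fragment is spliced into a TableGen-generated predicate and the wrapper function below is not a real API:
    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // For a fragment matching (add (shl $x, $y), $z): Operands[0] == $x,
    // Operands[1] == $y, Operands[2] == $z, even after the pattern has been
    // commuted to (add $z, (shl $x, $y)).
    static bool predicateSketch(ArrayRef<SDValue> Operands) {
      if (auto *ShAmt = dyn_cast<ConstantSDNode>(Operands[1]))
        return ShAmt->getZExtValue() < 32; // accept small shift amounts only
      return false;
    }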
* [RISCV] Add additional CSR instruction aliases (imm. operands) (Alex Bradbury, 2018-11-30; 1 file, -0/+10)
  This patch adds CSR instruction aliases for the cases where the instruction takes an immediate operand but the alias doesn't have the i suffix. This is necessary for gas/gcc compatibility. gas doesn't do a similar conversion for fsflags or fsrm, so this should be complete.
  Differential Revision: https://reviews.llvm.org/D55008
  Patch by Luís Marques.
  llvm-svn: 347991
* Fix parenthesis warning in IVDescriptors (Renato Golin, 2018-11-30; 1 file, -3/+3)
  llvm-svn: 347990
* Add a new reduction pattern match (Renato Golin, 2018-11-30; 1 file, -6/+66)
  Adding a new reduction pattern match for vectorizing code similar to TSVC s3111:
    for (int i = 0; i < N; i++)
      if (a[i] > b)
        sum += a[i];
  This patch adds support for fadd, fsub and fmul, as well as multiple branches and different (but compatible) instructions (e.g. add+sub) in different branches; see the sketch after this entry.
  The differences from the previous patch (https://reviews.llvm.org/D49168) are as follows:
  - Added a check of the fast-math property of the fp instruction to the previous patch
  - Fixed/added some patterns for if-reduction.ll
  Differential Revision: https://reviews.llvm.org/D54464
  Patch by Takahiro Miyoshi <takahiro.miyoshi@linaro.org> and Masakazu Ueno <masakazu.ueno@linaro.org>
  llvm-svn: 347989
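  As a hedged sketch (not code from the patch) of the new floating-point support, a conditional fadd reduction of the same shape as s3111, assuming fast-math allows the reassociation:
    // Conditional floating-point reduction; vectorizable with this patch
    // when the fadd carries the needed fast-math flags.
    float cond_sum(const float *a, int n, float b) {
      float sum = 0.0f;
      for (int i = 0; i < n; i++)
        if (a[i] > b)
          sum += a[i]; // conditional fadd feeding the reduction phi
      return sum;
    }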
* [RISCV] Add UNIMP instruction (32- and 16-bit forms) (Alex Bradbury, 2018-11-30; 2 files, -0/+17)
  This patch adds support for UNIMP in both 32- and 16-bit forms. The 32-bit form can be seen as a variant of the ECALL/EBREAK/etc. family of instructions. The 16-bit form is just all zeroes, which isn't a valid RISC-V instruction but still follows the 16-bit instruction form (i.e. bits 0-1 != 11).
  Until recently unimp was undocumented and supported just by binutils, which printed unimp for either the 16- or 32-bit form. Both forms are now documented <https://github.com/riscv/riscv-asm-manual/pull/20> and binutils now supports c.unimp <https://sourceware.org/ml/binutils-cvs/2018-11/msg00179.html>.
  Differential Revision: https://reviews.llvm.org/D54316
  Patch by Luís Marques.
  llvm-svn: 347988
* [SelectionDAG] Support result type promotion for FLT_ROUNDS_ (Alex Bradbury, 2018-11-30; 2 files, -0/+10)
  For targets where i32 is not a legal type (e.g. 64-bit RISC-V), LegalizeIntegerTypes must promote the result of ISD::FLT_ROUNDS_.
  Differential Revision: https://reviews.llvm.org/D53820
  llvm-svn: 347986
* [SelectionDAG] Support promotion of PREFETCH operands (Alex Bradbury, 2018-11-30; 2 files, -0/+15)
  For targets where i32 is not a legal type (e.g. 64-bit RISC-V), LegalizeIntegerTypes must promote the operands of ISD::PREFETCH.
  Differential Revision: https://reviews.llvm.org/D53281
  llvm-svn: 347980
* [LoopSimplifyCFG] Update MemorySSA in terminator folding. PR39783 (Max Kazantsev, 2018-11-30; 1 file, -6/+13)
  The terminator folding transform lacked MemorySSA updates for memory Phis, which exist within the MemorySSA analysis and need exactly the same kind of updates as regular Phis. Failing to update them properly ends up with inconsistent MemorySSA and manifests in various assertion failures. This patch adds Memory Phi updates to this transform.
  Thanks to @jonpa for finding this!
  Differential Revision: https://reviews.llvm.org/D55050
  Reviewed By: asbirlea
  llvm-svn: 347979
* [SelectionDAG] Support promotion of FRAMEADDR/RETURNADDR operands (Alex Bradbury, 2018-11-30; 2 files, -0/+10)
  For targets where i32 is not a legal type (e.g. 64-bit RISC-V), LegalizeIntegerTypes must promote the operand.
  Differential Revision: https://reviews.llvm.org/D53279
  llvm-svn: 347978
* [TargetLowering][RISCV] Introduce isSExtCheaperThanZExt hook and implement for RISC-V (Alex Bradbury, 2018-11-30; 4 files, -10/+27)
  DAGTypeLegalizer::PromoteSetCCOperands currently prefers to zero-extend operands when it is able to do so. For some targets this is more expensive than a sign-extension, which is also a valid choice. Introduce the isSExtCheaperThanZExt hook and use it in the new SExtOrZExtPromotedInteger helper. On RISC-V, we prefer sign-extension for FromTy == MVT::i32 and ToTy == MVT::i64, as it can be performed using a single instruction.
  Differential Revision: https://reviews.llvm.org/D52978
  llvm-svn: 347977
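  A minimal sketch of the RISC-V override of the new hook, assuming the in-tree version also guards on the subtarget:
    // Sign-extending i32 to i64 on RV64 takes a single instruction (or is
    // free), so report it as cheaper than zero-extension.
    bool RISCVTargetLowering::isSExtCheaperThanZExt(EVT SrcVT, EVT DstVT) const {
      return Subtarget.is64Bit() && SrcVT == MVT::i32 && DstVT == MVT::i64;
    }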
* [RISCV] Introduce codegen patterns for instructions introduced in RV64I (Alex Bradbury, 2018-11-30; 2 files, -1/+99)
  As discussed in the RFC <http://lists.llvm.org/pipermail/llvm-dev/2018-October/126690.html>, 64-bit RISC-V has i64 as the only legal integer type. This patch introduces patterns to support codegen of the new instructions introduced in RV64I: addiw, addw, subw, sllw, slliw, srlw, srliw, sraw, sraiw, ld, sd.
  Custom selection code is needed for srliw, as SimplifyDemandedBits will remove lower bits from the mask, meaning the obvious pattern won't work:
    def : Pat<(sext_inreg (srl (and GPR:$rs1, 0xffffffff), uimm5:$shamt), i32),
              (SRLIW GPR:$rs1, uimm5:$shamt)>;
  This is sufficient to compile and execute all of the GCC torture suite for RV64I other than those files using frameaddr or returnaddr intrinsics (LegalizeDAG doesn't know how to promote the operands; a future patch addresses this).
  When promoting i32 sltu/sltiu operands, it would be more efficient to use sign-extension rather than zero-extension for RV64. A future patch adds a hook to allow this.
  Differential Revision: https://reviews.llvm.org/D52977
  llvm-svn: 347973
* [X86] Emit PACKUS directly from the v16i8 LowerMULH code instead of using a shuffle (Craig Topper, 2018-11-30; 1 file, -6/+1)
  llvm-svn: 347967
* [X86] Change the pre-sse4.1 code in the v16i8 MULHU lowering to be what we get after DAG combine cleans it up (Craig Topper, 2018-11-30; 1 file, -13/+18)
  Previously we emitted a punpcklbw/punpckhbw to move the byte elements into the upper half of 16-bit elements, then shifted right by 8 to zero the upper bits. After DAG combine, we end up with punpcklbw/punpckhbw into the lower bits with zeros in the upper bits and no shifts. So just emit that directly.
  llvm-svn: 347966
* [ARM] Don't expand sdiv when optimising for minsize (Sjoerd Meijer, 2018-11-30; 2 files, -0/+47)
  Don't expand SDIV with an immediate that is a power of 2 if we optimise for minimum code size. For example:
    sdiv %1, i32 4
  gets expanded to a sequence of 3 instructions, but this is suboptimal for minimum code size, so instead we just generate a MOV and an SDIV if integer division is supported.
  Differential Revision: https://reviews.llvm.org/D54546
  llvm-svn: 347965
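  For illustration (an assumption about the source shape, not code from the patch), C of this form compiled at minsize on a core with hardware integer division should now get the mov-plus-sdiv lowering:
    // With minsize (e.g. -Oz) on an ARM core that has hardware sdiv:
    int div_by_4(int x) {
      return x / 4; // previously expanded to a 3-instruction shift sequence
    }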
* [CodeGen] Fix bugs in BranchFolderPass when debug labels are generated (Hsiangkai Wang, 2018-11-30; 1 file, -5/+5)
  Skip DBG_VALUE and DBG_LABEL in branch folding algorithms. The bug is reported in https://bugs.chromium.org/p/chromium/issues/detail?id=898160.
  Differential Revision: https://reviews.llvm.org/D54199
  llvm-svn: 347964
* [NFC] Refine doxygen format (Hsiangkai Wang, 2018-11-30; 1 file, -57/+64)
  Differential Revision: https://reviews.llvm.org/D54568
  llvm-svn: 347963
* [SystemZ::TTI] i8/i16 operand extension costs revisited (Jonas Paulsson, 2018-11-30; 1 file, -20/+16)
  Three minor changes to these extra costs:
  * For ICmp instructions, instead of adding 2 all the time for extending each operand, this is only done if that operand is neither a load nor an immediate.
  * The operand extension costs for divides were removed, because we now already use a high cost (20) for the divide itself.
  * The extra costs for lshr/ashr were removed, as they did not seem useful.
  Review: Ulrich Weigand
  https://reviews.llvm.org/D55053
  llvm-svn: 347961
* [X86] Fix a couple types in SimplifyDemandedVectorEltsForTargetNode. NFCI (Craig Topper, 2018-11-30; 1 file, -3/+3)
  We had an EVT variable capturing the result of getSimpleValueType, which returns an MVT. Another place used EVT where it could have been MVT. And an 'int' should have been 'unsigned'.
  llvm-svn: 347959
* Fix build warnings introduced in rL347938 (Mircea Trofin, 2018-11-30; 1 file, -1/+3)
  Summary: Suppressed warnings in release builds due to a variable used only in an assert statement.
  Subscribers: llvm-commits, eraman, mgorny
  Differential Revision: https://reviews.llvm.org/D55100
  llvm-svn: 347939
* Revert "Revert r347596 "Support for inserting profile-directed cache ↵Mircea Trofin2018-11-306-1/+397
| | | | | | | | | | | | | | | | | | | prefetches"" Summary: This reverts commit d8517b96dfbd42e6a8db33c50d1fa1e58e63fbb9. Fix: correct the use of DenseMap. Reviewers: davidxl, hans, wmi Reviewed By: wmi Subscribers: mgorny, eraman, llvm-commits Differential Revision: https://reviews.llvm.org/D55088 llvm-svn: 347938
* [SCEV] Guard movement of insertion point for loop-invariants (Warren Ristow, 2018-11-30; 1 file, -41/+42)
  r320789 suppressed moving the insertion point of SCEV expressions with div/rem operations to the loop header in non-loop-invariant situations. This, and similar, hoisting is also unsafe in the loop-invariant case, since there may be a guard against a zero denominator. This is an adjustment to the fix of r320789 to suppress the movement even in the loop-invariant case.
  This fixes PR30806.
  Differential Revision: https://reviews.llvm.org/D54713
  llvm-svn: 347934
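  The hazard looks roughly like this (an illustrative sketch, not taken from PR30806):
    // d is loop-invariant, but hoisting the division into the loop header
    // would move it above the guard against a zero denominator.
    void scale(int *out, int n, int d) {
      for (int i = 0; i < n; i++) {
        if (d != 0)          // guard: the division must stay below this check
          out[i] = 100 / d;
      }
    }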
* [WebAssembly] Expand unavailable integer operations for vectors (Thomas Lively, 2018-11-29; 1 file, -6/+14)
  Summary: Expands for vector types all of the integer operations that are expanded for scalars because they are not supported at all by WebAssembly. This CL has no tests because such tests would really be testing the target-independent expansion, but I'm happy to add tests if reviewers think it would be helpful.
  Reviewers: aheejin, dschuff
  Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
  Differential Revision: https://reviews.llvm.org/D55010
  llvm-svn: 347923
* Produce an error on non-encodable offsets for darwin ARM scattered relocations (Jonas Devlieghere, 2018-11-29; 1 file, -0/+20)
  Scattered ARM relocations for Mach-O files only have 24 bits available to encode the offset. This was not checked but just truncated, and could result in corrupt binaries after linking because the relocations were applied to the wrong offset. This patch will check and error out in those situations instead of emitting a wrong relocation.
  Patch by: Sander Bogaert (dzn)
  Differential revision: https://reviews.llvm.org/D54776
  llvm-svn: 347922
* Comment tweak requested in code review. NFC (Paul Robinson, 2018-11-29; 1 file, -1/+2)
  I forgot to do this before committing D54755.
  llvm-svn: 347918
* [DAGCombiner] narrow truncated binops (Sanjay Patel, 2018-11-29; 1 file, -0/+22)
  The motivating case for this is shown in https://bugs.llvm.org/show_bug.cgi?id=32023 and the corresponding rot16.ll regression tests. Because x86 scalar shift amounts are i8 values, we can end up with trunc-binop-trunc sequences that don't get folded in IR.
  As the TODO comments suggest, there will be regressions if we extend this (for x86, we mostly seem to be missing LEA opportunities, but there are likely vector folds missing too). I think those should be considered existing bugs because this is the same transform that we do as an IR canonicalization in instcombine. We just need more tests to make those visible independent of this patch.
  Differential Revision: https://reviews.llvm.org/D54640
  llvm-svn: 347917
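  A sketch of the source shape involved, modeled loosely on the rot16 tests (an assumption, since the regression tests themselves are IR): the 16-bit rotate's shift amounts become i8 on x86, which introduces the truncations:
    // 16-bit rotate; x86's i8 shift amounts yield trunc-binop-trunc in the DAG.
    unsigned short rot16(unsigned short x, unsigned short amt) {
      return (unsigned short)((x << (amt & 15)) | (x >> ((16 - amt) & 15)));
    }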
* [RISCV] Implement codegen for cmpxchg on RV32IA (Alex Bradbury, 2018-11-29; 5 files, -3/+210)
  Utilise a similar ('late') lowering strategy to D47882. The changes to AtomicExpandPass allow this strategy to be utilised by other targets which implement shouldExpandAtomicCmpXchgInIR. All cmpxchg are lowered as 'strong' currently and failure ordering is ignored. This is conservative but correct.
  Differential Revision: https://reviews.llvm.org/D48131
  llvm-svn: 347914
* [X86] Change the pre-type legalization DAG combine added in r347898 into a custom type legalization operation instead (Craig Topper, 2018-11-29; 1 file, -49/+33)
  This seems to produce the same results on the tests we have.
  llvm-svn: 347912
* Revert r347871 "Fix: Add support for TFE/LWE in image intrinsic" (David Stuttard, 2018-11-29; 13 files, -563/+57)
  Also reverts fix r347876. One of the buildbots was reporting a failure in some relevant tests that I can't repro or explain at present, so reverting until I can isolate.
  llvm-svn: 347911
* Introduce MaxUsesToExplore argument to capture tracking (Artur Pilipenko, 2018-11-29; 1 file, -11/+9)
  Currently CaptureTracker gives up if it encounters a value with more than 20 uses. The motivation for this cap is to keep it relatively cheap for the BasicAliasAnalysis use case, where the results can't be cached. However, other clients of CaptureTracker might be OK with a higher cost. This patch introduces an argument for the PointerMayBeCaptured functions to specify the max number of uses to explore. The motivation for this change is a downstream user of CaptureTracker, but I believe upstream clients might also benefit from the more fine-grained cap.
  Reviewed By: hfinkel
  Differential Revision: https://reviews.llvm.org/D55042
  llvm-svn: 347910
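  A sketch of the resulting API from a caller's side; the parameter name and default follow the patch description, but treat the exact signature as an assumption:
    #include "llvm/Analysis/CaptureTracking.h"
    using namespace llvm;

    bool mayBeCaptured(const Value *V) {
      // Explore up to 100 uses instead of the previous hard cap of 20.
      return PointerMayBeCaptured(V, /*ReturnCaptures=*/true,
                                  /*StoreCaptures=*/true,
                                  /*MaxUsesToExplore=*/100);
    }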
* [MachineScheduler] Order FI-based memops based on stack direction (Francis Visoiu Mistrih, 2018-11-29; 2 files, -9/+26)
  It makes more sense to order FI-based memops in descending order when the stack goes down. This allows offsets to stay "consecutive" and allows easier pattern matching.
  llvm-svn: 347906
* [SelectionDAG][AArch64][X86] Move legalization of vector MULHS/MULHU from LegalizeDAG to LegalizeVectorOps (Craig Topper, 2018-11-29; 4 files, -85/+40)
  I believe we should be legalizing these with the rest of the vector binary operations. If any custom lowering is required for these nodes, this will give the DAG combine between LegalizeVectorOps and LegalizeDAG a chance to run on the custom code before constant build_vectors are lowered in LegalizeDAG.
  I've moved MULHU/MULHS handling in AArch64 from Lowering to isel. Moving the lowering earlier caused build_vector+extract_subvector simplifications to kick in, which made the generated code worse.
  Differential Revision: https://reviews.llvm.org/D54276
  llvm-svn: 347902
* [X86] Add a DAG combine pre type legalization to widen division by constant splat on narrow vectors to avoid scalarization (Craig Topper, 2018-11-29; 1 file, -0/+39)
  This is another patch for -x86-experimental-vector-widening. This pre-widens narrow division by constants so that we can get past the legal type check in the generic DAG combiner. Otherwise we end up scalarizing.
  I've restricted this to splats for now because it was easy to just call DAG.getConstant. Not sure what we should do for non-splat: increase the element size, or widen the constant vector by padding with 1?
  Differential Revision: https://reviews.llvm.org/D54919
  llvm-svn: 347898
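  An illustrative source pattern that would hit this path (an assumption, written in clang/GCC vector-extension syntax):
    // A narrow (64-bit) vector divided by a constant splat; without the
    // pre-widening combine, type legalization scalarizes the division.
    typedef short v4i16 __attribute__((vector_size(8)));
    v4i16 div_by_7(v4i16 x) {
      return x / (v4i16){7, 7, 7, 7}; // constant-splat divisor
    }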
* [InstSimplify] fold select with implied condition (Sanjay Patel, 2018-11-29; 2 files, -18/+39)
  This is an almost direct move of the functionality from InstCombine to InstSimplify. There's no reason not to do this in InstSimplify because we never create a new value with this transform. (There's a question of whether any dominance-based transform belongs in either of these passes, but that's a separate issue.)
  I've changed one of the conditions for the fold (one of the blocks for the branch must be the block we started with) into an assert because I'm not sure how that could ever be false. We need one extra check to make sure that the instruction itself is in a basic block, because passes other than InstCombine may be using InstSimplify as an analysis on values that are not wired up yet.
  The 3-way compare changes show that InstCombine has some kind of phase-ordering hole. Otherwise, we would have already gotten the intended final result that we now show here.
  llvm-svn: 347896
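  An example of the shape that now folds in InstSimplify (illustrative only, not from the patch's tests):
    // On the taken path, x > 10 implies x > 5, so the inner select
    // simplifies to 'a' without creating any new values.
    int pick(int x, int a, int b) {
      if (x > 10)
        return x > 5 ? a : b; // folds to 'return a'
      return b;
    }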
* [LICM] Reapply r347776 "Make LICM able to hoist phis" with fix (John Brawn, 2018-11-29; 1 file, -15/+314)
  That commit caused a large compile-time slowdown in some cases when NDEBUG is off, due to the dominator tree verification it added. Fix this by only doing dominator tree and loop info verification when something has been hoisted.
  Differential Revision: https://reviews.llvm.org/D52827
  llvm-svn: 347889
* [ThinLTO] Import local variables from the same module as caller (Teresa Johnson, 2018-11-29; 2 files, -12/+39)
  Summary: We can sometimes end up with multiple copies of a local variable that have the same GUID in the index. This happens when there are local variables with the same name that are in different source files having the same name/path at compile time (but compiled into different bitcode objects). In this case make sure we import the copy in the caller's module. This enables importing both of the variables having the same GUID (but which will have different promoted names, since the module paths, and therefore the module hashes, will be distinct).
  Importing the wrong copy is particularly problematic for read-only variables, since we must import them as a local copy whenever referenced. Otherwise we get undefs at link time.
  Note that the llvm-lto.cpp and ThinLTOCodeGenerator changes are needed for testing the distributed index case via clang, which will be sent as a separate clang-side patch shortly. We were previously not doing the dead code/read only computation before computing imports when testing distributed index generation (like it was for testing importing and other ThinLTO mechanisms alone).
  Reviewers: evgeny777
  Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, dang, llvm-commits
  Differential Revision: https://reviews.llvm.org/D55047
  llvm-svn: 347886
* [AMDGPU] Add and update scalar instructions (Graham Sellers, 2018-11-29; 4 files, -34/+214)
  This patch adds support for S_ANDN2 and S_ORN2 32-bit and 64-bit instructions and adds splits to move them to the vector unit (for which there is no equivalent instruction). It modifies the way that the more complex scalar instructions are lowered to vector instructions by first breaking them down into sequences of simpler scalar instructions, which are then lowered through the existing code paths. The pattern for S_XNOR has also been updated to apply inversion to one input rather than the output of the XOR, as the result is equivalent and may allow leaving the NOT instruction on the scalar unit.
  New tests for NAND, NOR, ANDN2 and ORN2 have been added, and existing tests now hit the new instructions (and have been modified accordingly).
  Differential: https://reviews.llvm.org/D54714
  llvm-svn: 347877
* Fix: Add support for TFE/LWE in image intrinsic (David Stuttard, 2018-11-29; 1 file, -2/+1)
  My change svn-id: 347871 caused a buildbot failure due to an unused variable def (used only in an assert).
  Change-Id: Ia882d18bb6fa79b4d7bbfda422b9ea5d23eab336
  llvm-svn: 347876
* Revert r347823 "[TextAPI] Switch back to a custom Platform enum." (Hans Wennborg, 2018-11-29; 13 files, -1396/+0)
  It broke the Windows buildbots, e.g. http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast/builds/21829/steps/test/logs/stdio
  This also reverts the follow-ups: r347824, r347827, and r347836.
  llvm-svn: 347874
* [CallSiteSplitting] Report edge deletion to DomTreeUpdater (Joseph Tremoulet, 2018-11-29; 1 file, -1/+3)
  Summary: When splitting musttail calls, the split blocks' original terminators get removed; inform the DTU when this happens. Also add a testcase that fails an assertion in the DTU without this fix.
  Reviewers: fhahn, junbuml
  Reviewed By: fhahn
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D55027
  llvm-svn: 347872
* Add support for TFE/LWE in image intrinsics (David Stuttard, 2018-11-29; 13 files, -57/+564)
  TFE and LWE support requires extra result registers that are written in the event of a failure, in order to detect that failure case. The specific use case that initiated these changes is sparse texture support.
  This means that if image intrinsics are used with either option turned on, the programmer must ensure that the return type can contain all of the expected results. This can result in redundant registers, since the vector size must be a power of 2.
  This change takes roughly 6 parts:
  1. Modify the instruction defs in tablegen to add new instruction variants that can accommodate the extra return values.
  2. Updates to lowerImage in SIISelLowering.cpp to accommodate setting TFE or LWE (where the bulk of the work for these instruction types is now done).
  3. Extra verification code to catch cases where intrinsics have been used but insufficient return registers are used.
  4. Modification to the adjustWritemask optimisation to account for TFE/LWE being enabled (requires extra registers to be maintained for the error return value).
  5. An extra pass to zero-initialize the error value return. This is because if the error does not occur, the register is not written and thus must be zeroed before use. Also added a new (on by default) option to ensure ALL return values are zero-initialized, which is required for sparse texture support.
  6. Disable the inst_combine optimization in the presence of tfe/lwe (later TODO to re-enable this and handle it correctly).
  There's an additional fix to avoid a dmask of 0. For an image intrinsic with tfe where all result channels except tfe were unused, I was getting an image instruction with dmask=0 and only a single vgpr result for tfe. That is incorrect because the hardware assumes there is at least one vgpr result, plus the one for tfe. Fixed by forcing dmask to 1, which gives the desired two-vgpr result with tfe in the second one.
  The TFE or LWE result is returned from the intrinsics using an aggregate type. Look in the test code provided to see how this works, but in essence IR code to invoke the intrinsic looks as follows:
    %v = call {<4 x float>,i32} @llvm.amdgcn.image.load.1d.v4f32i32.i32(i32 15, i32 %s, <8 x i32> %rsrc, i32 1, i32 0)
    %v.vec = extractvalue {<4 x float>, i32} %v, 0
    %v.err = extractvalue {<4 x float>, i32} %v, 1
  Differential revision: https://reviews.llvm.org/D48826
  Change-Id: If222bc03642e76cf98059a6bef5d5bffeda38dda
  llvm-svn: 347871
* [CVP] tidy processCmp(); NFC (Sanjay Patel, 2018-11-29; 1 file, -14/+14)
  1. The variables were confusing: 'C' typically refers to a constant, but here it was the Cmp.
  2. Formatting violations.
  3. Simplify code to return true/false constant.
  llvm-svn: 347868