path: root/llvm/lib/Target
Commit message   Author   Age   Files   Lines
...
* [MIPS GlobalISel] Select float constantsPetar Avramovic2019-03-283-4/+71
| | | | | | | | Select 32 and 64 bit float constants for MIPS32. Differential Revision: https://reviews.llvm.org/D59933 llvm-svn: 357183
* [x86] avoid cmov in movmsk reductionSanjay Patel2019-03-281-19/+21
| | | | | | | | | | | | | | | | | | | | | | This is probably the least important of our movmsk problems, but I'm starting at the bottom to reduce distractions. We were creating a select_cc which bypasses the select and bitmask codegen optimizations that we have now. If we produce a compare+negate instead, we allow things like neg/sbb carry bit hacks, and in all cases we avoid a cmov. There's no partial register update danger in these sequences because we always produce the zero-register xor ahead of the 'set' if needed. There seems to be a missing fold for sext of a bool bit here: negl %ecx movslq %ecx, %rax ...but that's an independent transform. Differential Revision: https://reviews.llvm.org/D59818 llvm-svn: 357172
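A source-level sketch of the kind of movmsk reduction involved (the intrinsic and constants here are illustrative, not taken from the commit's tests; the exact codegen depends on the subtarget):
```
#include <immintrin.h>

// Collect the four lane sign bits and turn "all lanes negative" into a 0/-1
// mask. After this change the comparison result can be sign-extended via
// set + negate instead of materializing both constants and emitting a cmov.
int allNegativeMask(__m128 v) {
  int bits = _mm_movemask_ps(v); // low 4 bits hold the lane sign bits
  return bits == 0xF ? -1 : 0;
}
```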
* [X86MacroFusion] Handle branch fusion (AMD CPUs).Clement Courbet2019-03-285-56/+112
| | | | | | | | | | | | | | | | | | Summary: This adds a BranchFusion feature to replace the usage of the MacroFusion for AMD CPUs. See D59688 for context. Reviewers: andreadb, lebedev.ri Subscribers: hiraditya, jdoerfert, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59872 llvm-svn: 357171
* AMDGPU: Make exec mask optimizations more resistant to block splitsMatt Arsenault2019-03-283-22/+84
| | | | | | | Also improve the check for SALU instructions to also ignore implicit_def and other fake instructions. llvm-svn: 357170
* [X86] AMD Piledriver (BdVer2): fine-tune some latenciesRoman Lebedev2019-03-281-28/+50
| | | | | | | | | | | | | | Based on llvm-exegesis measurements. Now that llvm-exegesis is ~2 magnitudes faster, and is a bit smarter, it is now possible to continue cleanup of the scheduler model. With this, there are no more latency inconsistencies for the opcodes that produce stable measurements, and only a few inconsistencies for unstable measurements (MMX_* opcodes, opcodes that llvm-exegesis measures by chaining - CMP, TEST, BT, SETcc, CVT, MOV, etc.) llvm-svn: 357169
* [NFC] Format InlineFeatureIgnoreList.Clement Courbet2019-03-281-51/+52
| | | | | | To avoid more spurious clang-format changes when adding features (D59872). llvm-svn: 357168
* [X86][AVX] Add missing vXi16 broadcast fold patternsSimon Pilgrim2019-03-282-0/+24
| | | | | | | | Now that D59484 has landed it's easier to add these. Added missing AVX512BW v32i16 equivalents while I was at it. llvm-svn: 357155
* [ARM GlobalISel] Fix G_STORE with s1Diana Picus2019-03-281-0/+18
| | | | | | | G_STORE for 1-bit values uses a STRBi12, which stores the whole byte. Zero out the undefined bits before writing. llvm-svn: 357154
* [ARM GlobalISel] Fix selection of G_SELECTDiana Picus2019-03-281-5/+3
| | | | | | | | | | | G_SELECT uses a 1-bit scalar for the condition, and is currently implemented with a plain CMPri against 0. This means that values such as 0x1110 are interpreted as true, when instead the higher bits should be treated as undefined and therefore ignored. Replace the CMPri with a TSTri against 0x1, which performs an implicit AND, yielding the expected result. llvm-svn: 357153
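As an illustration (not code from the patch), the difference between the old and new checks, modeled at the source level where only bit 0 of the widened 1-bit condition is defined:
```
// Sketch only: 'cond' models a G_SELECT condition widened to 32 bits, so any
// bits above bit 0 are undefined garbage (e.g. 0x1110).
int selectCmpAgainstZero(unsigned cond, int a, int b) {
  return cond != 0 ? a : b; // old behaviour: garbage bits can flip the result
}
int selectTstBitZero(unsigned cond, int a, int b) {
  return (cond & 1) ? a : b; // new behaviour: TSTri #1 reads only bit 0
}
```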
* [WebAssembly] Rename wasm fixup kindsSam Clegg2019-03-284-14/+12
| | | | | | | | | | | These fixup kinds are not explicitly related to the code section. They are there to signal how to apply the fixup. Also, a couple of other minor wasm cleanups. Differential Revision: https://reviews.llvm.org/D59908 llvm-svn: 357145
* [ARM] Remove dead function ARMMCCodeEmitter::getSOImmOpValueSam Clegg2019-03-271-34/+0
| | | | | | | | | The last reference to this function was removed from the ARM td files in 2015 in rL225266. Differential Revision: https://reviews.llvm.org/D59868 llvm-svn: 357130
* [x86] improve AVX lowering of vector zextSanjay Patel2019-03-271-0/+20
| | | | | | | | | | | | | | | | | | | | | If we know the 2 halves of an oversized zext-in-reg are the same, don't create those halves independently. I tried several different approaches to fold this, but it's difficult to get right during legalization. In the default path, we are creating a generic shuffle that looks like an unpack high, but it can get transformed into a different mask (a blend), so it's not straightforward to match that. If we try to fold after it actually becomes an X86ISD::UNPCKH node, we can't be sure what the operand node is - it might be a generic shuffle, or it could be some x86-specific op. From the test output, we should be doing something like this for SSE4.1 as well, but I'd rather leave that as a follow-up since it involves changing lowering actions. Differential Revision: https://reviews.llvm.org/D59777 llvm-svn: 357129
* [x86] look through bitcast operand of MOVMSKSanjay Patel2019-03-271-6/+5
| | | | | | | | | This is not exactly NFC because it should make further combines of MOVMSK easier to match, but there should be no outward differences because we have isel patterns in place specifically to allow this. See: // Also support integer VTs to avoid a int->fp bitcast in the DAG. llvm-svn: 357128
* [X86ISelDAGToDAG] Move initialization of OptForSize and OptForMinSize from PreprocessISelDAG to runOnMachineFunction. NFCICraig Topper2019-03-271-5/+7
| | | | | This makes more sense as a place to initialize these. I don't think runOnMachineFunction was overridden when these cached values were originally created. llvm-svn: 357123
* [LegalizeVectorTypes] Allow single loads and stores for more short vectorsJustin Bogner2019-03-271-6/+8
| | | | | | | | | | | | | | | | | | When lowering a load or store for TypeWidenVector, the type legalizer would use a single load or store if the associated integer type was legal or promoted. E.g. it loads a v4i8 as an i32 if i32 is legal/promotable. (See https://reviews.llvm.org/rL236528 for reference.) This applies that behaviour to vector types. If the vector type is TypePromoteInteger, the element type is going to be TypePromoteInteger as well, which leads to a single promoting load rather than N individual promoting loads. For instance, if we have a v3i1, we would now have a load of v4i1 instead of 3 loads of i1. Patch by Guillaume Marques. Thanks! Differential Revision: https://reviews.llvm.org/D56201 llvm-svn: 357120
* [WebAssembly] Add some whitespace to WebAssemblyFixIrreducibleControlFlowAlon Zakai2019-03-271-0/+2
| | | | | | | Differential Revision: https://reviews.llvm.org/D59855 modified: llvm/lib/Target/WebAssembly/WebAssemblyFixIrreducibleControlFlow.cpp llvm-svn: 357117
* [ARM] Don't confuse the scheduler for very large VLDMDIA etc.Eli Friedman2019-03-271-1/+6
| | | | | | | | | | | | | ARMBaseInstrInfo::getNumLDMAddresses is making bad assumptions about the memory operands of load and store-multiple operations. This doesn't really fix the problem properly, but it's enough to prevent crashing, at least. Fixes https://bugs.llvm.org/show_bug.cgi?id=41231 . Differential Revision: https://reviews.llvm.org/D59834 llvm-svn: 357109
* [AArch64][GlobalISel] Make G_PHI of v2s64, v4s32, v2s32 legal.Amara Emerson2019-03-271-1/+1
| | | | llvm-svn: 357108
* Reapply "AMDGPU: Scavenge register instead of findUnusedReg"Matt Arsenault2019-03-271-1/+1
| | | | | | | | | | | | | This reapplies r356149, using the correct overload of findUnusedReg which passes the current iterator. This worked most of the time, because the scavenger iterator was moved at the end of the frame index loop in PEI. This would fail if the spill was the first instruction. This was further hidden by the fact that the scavenger wasn't passed in for normal frame index elimination. llvm-svn: 357098
* [X86] Add post-isel pseudos for rotate by immediate using SHLD/SHRDCraig Topper2019-03-272-10/+36
| | | | | | | | | | | | | | Haswell CPUs have special support for SHLD/SHRD with the same register for both sources. Such an instruction will go to the rotate/shift unit on port 0 or 6. This gives it 1 cycle latency and 0.5 cycle reciprocal throughput. When the register is not the same, it becomes a 3 cycle operation on port 1. Sandybridge and Ivybridge always have 1 cycle latency and 0.5 cycle reciprocal throughput for any SHLD. When the FastSHLDRotate feature flag is set, we try to use SHLD for rotate by immediate unless BMI2 is enabled. But MachineCopyPropagation can look through a copy and change one of the sources to be different. This will break the hardware optimization. This patch adds a pseudo instruction to hide the second source input until after register allocation and MachineCopyPropagation. I'm not sure if this is the best way to do this or if there's some other way we can make this work. Fixes PR41055 Differential Revision: https://reviews.llvm.org/D59391 llvm-svn: 357096
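For context, a hedged source-level example of a rotate by immediate; whether SHLD is actually chosen depends on the FastSHLDRotate feature and the absence of BMI2, so the comment describes the intent rather than guaranteed output:
```
#include <cstdint>

// Rotate left by an immediate. On Haswell-class CPUs with FastSHLDRotate this
// may be selected as SHLD with both source operands tied to the same register;
// the new pseudo keeps that tie intact through MachineCopyPropagation.
uint32_t rotl7(uint32_t x) {
  return (x << 7) | (x >> (32 - 7));
}
```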
* [AArch64][SVE] Asm: error on unexpected SVE vector register type suffixSander de Smalen2019-03-271-3/+2
| | | | | | | | | | | | | | | | | | | | | | This patch fixes an assembler bug that allowed SVE vector registers to contain a type suffix when not expected. The SVE unpredicated movprfx instruction is the only instruction affected. The following are examples of what was previously valid: movprfx z0.b, z0.b movprfx z0.b, z0.s movprfx z0, z0.s These instructions are now erroneous. Patch by Cullen Rhodes (c-rhodes) Reviewed By: sdesmalen Differential Revision: https://reviews.llvm.org/D59636 llvm-svn: 357094
* AMDGPU: Enable the scavenger for large framesMatt Arsenault2019-03-271-5/+14
| | | | | | | Another test is needed for the case where the scavenge fails, but there's another issue with that which needs an additional fix. llvm-svn: 357093
* AMDGPU: Add additional MIR tests for exec mask optimizationsMatt Arsenault2019-03-271-3/+11
| | | | | | | | | | Also includes one example of how this transform is unsound. This isn't verifying the copies are used in the control flow intrinsic patterns. Also add option to disable exec mask opt pass. Since this pass is unsound, it may be useful to turn it off until it is fixed. llvm-svn: 357091
* AMDGPU: Skip debug_instr when collapsing end_cfMatt Arsenault2019-03-271-3/+8
| | | | | | | Based on how these are inserted, I doubt this was causing a problem in practice. llvm-svn: 357090
* AMDGPU: Fix missing scc implicit def on s_andn2_b64_termMatt Arsenault2019-03-271-18/+13
| | | | | | | Introduce new helper class to copy properties directly from the base instruction. llvm-svn: 357089
* AMDGPU: Don't hardcode num defs for MUBUF instructionsMatt Arsenault2019-03-271-2/+2
| | | | | | | This shouldn't change anything since the no-ret atomics are selected later. llvm-svn: 357084
* AMDGPU: wave_barrier is not isBarrierMatt Arsenault2019-03-271-1/+0
| | | | | | | This is not a control flow instruction, so should not be marked as isBarrier. This fixes a verifier error if followed by unreachable. llvm-svn: 357081
* [BPF] use std::map to ensure consistent outputYonghong Song2019-03-271-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | | The .BTF.ext FuncInfoTable and LineInfoTable contain information organized per ELF section. Current definition of FuncInfoTable/LineInfoTable is: std::unordered_map<uint32_t, std::vector<BTFFuncInfo>> FuncInfoTable std::unordered_map<uint32_t, std::vector<BTFLineInfo>> LineInfoTable where the key is the section name offset in the string table. The unordered_map may cause the order of section output to differ between platforms. The same applies to the unordered_map definition std::unordered_map<std::string, std::unique_ptr<BTFKindDataSec>> DataSecEntries where BTF_KIND_DATASEC entries may have different ordering on different platforms. This patch fixes the issue by using std::map. Test static-var-derived-type.ll is modified to generate two DataSecs, which ensures the ordering is the same for all supported platforms. Signed-off-by: Yonghong Song <yhs@fb.com> llvm-svn: 357077
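A minimal sketch of the ordering difference behind this fix; the key and value types are stand-ins, not the real BTF data structures:
```
#include <cstdio>
#include <map>
#include <unordered_map>

int main() {
  // std::map iterates in ascending key order on every platform, so sections
  // keyed by string-table offset are emitted in a deterministic order.
  std::map<unsigned, const char *> ordered = {{30, ".text"}, {10, ".data"}};
  for (const auto &kv : ordered)
    std::printf("offset %u -> %s\n", kv.first, kv.second); // 10, then 30

  // std::unordered_map iteration order is unspecified and varies between
  // standard-library implementations, which is what made the output unstable.
  std::unordered_map<unsigned, const char *> hashed = {{30, ".text"}, {10, ".data"}};
  for (const auto &kv : hashed)
    std::printf("offset %u -> %s\n", kv.first, kv.second); // order not guaranteed
}
```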
* AMDGPU: Fix areLoadsFromSameBasePtr for DS atomicsMatt Arsenault2019-03-271-4/+11
| | | | | | The offset operand index is different for atomics. llvm-svn: 357073
* Revert of 357063 [AMDGPU][MC] Corrected handling of tied src for atomic return MUBUF opcodesDmitry Preobrazhensky2019-03-271-7/+7
| | | | | | | Reason: the change was mistakenly committed before review llvm-svn: 357066
* [AArch64] NFC: Cleanup isAArch64FrameOffsetLegalSander de Smalen2019-03-272-202/+109
| | | | | | | | | | | | | | | | Cleanup isAArch64FrameOffsetLegal by: - Merging the large switch statement to reuse AArch64InstrInfo::getMemOpInfo(). - Using AArch64InstrInfo::getUnscaledLdSt() to determine whether an instruction has an unscaled variant. - Simplifying the logic that calculates the offset to fit the immediate. Reviewers: paquette, evandro, eli.friedman, efriedma Reviewed By: efriedma Differential Revision: https://reviews.llvm.org/D59636 llvm-svn: 357064
* [AMDGPU][MC] Corrected handling of tied src for atomic return MUBUF opcodesDmitry Preobrazhensky2019-03-271-7/+7
| | | | | | | | | | See bug 40917: https://bugs.llvm.org/show_bug.cgi?id=40917 Reviewers: artem.tamazov, arsenm Differential Revision: https://reviews.llvm.org/D59305 llvm-svn: 357063
* [AArch64] Adds cases for LDRSHWui and LDRSHXui to getMemOpInfoSander de Smalen2019-03-271-0/+6
| | | | | | | This patch also adds cases PRFUMi and PRFMui. This change was discussed in https://reviews.llvm.org/D59635. llvm-svn: 357059
* Revert rL356864 : [X86][SSE41] Start shuffle combining from ZERO_EXTEND_VECTOR_INREG (PR40685)Simon Pilgrim2019-03-271-33/+28
| | | | | | | | | Enable SSE41 ZERO_EXTEND_VECTOR_INREG shuffle combines - for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern we reduce the shuffles (port5-bottleneck on Intel) at the expense of creating a zero (pxor v,v) and an extra register move - which is a good trade off as these are pretty cheap and in most cases it doesn't increase register pressure. This also exposed a missed opportunity to use combine to ZERO_EXTEND_VECTOR_INREG with folded loads - even if we're in the float domain. ........ Causes PR41249 llvm-svn: 357057
* [X86] When iselling (x << C1) and/or/xor C2 as (x and/or/xor (C2>>C1)) << C1, go through the isel table instead of manually selecting.Craig Topper2019-03-271-40/+9
| | | | | | | Previously we manually selected the AND/OR/XOR with immediate and the SHL (or ADD if the shift is 1). But this was missing out on the opportunity to use a 64-bit AND with a 32-bit immediate and possibly other isel tricks we have built into the tables. Instead, insert the new nodes into the DAG using insertDAGNode and allow them each to be selected through the normal table. llvm-svn: 357049
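The algebra behind the transform can be sanity-checked with a short sketch; the constants are arbitrary (chosen so the low C1 bits of C2 are zero), and the exact legality conditions enforced by the isel code may be stricter than shown:
```
#include <cassert>
#include <cstdint>

// Check: (x << C1) op C2 == (x op (C2 >> C1)) << C1. For AND this holds
// unconditionally (the shifted value has zero low bits); for OR/XOR it needs
// the low C1 bits of C2 to be zero, as in the constant picked below.
int main() {
  const uint32_t C1 = 5, C2 = 0x00ffff00u;
  for (uint32_t i = 0; i < 100000; ++i) {
    uint32_t x = i * 2654435761u; // spread out the test values
    assert(((x << C1) & C2) == ((x & (C2 >> C1)) << C1));
    assert(((x << C1) | C2) == ((x | (C2 >> C1)) << C1));
    assert(((x << C1) ^ C2) == ((x ^ (C2 >> C1)) << C1));
  }
  return 0;
}
```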
* [NFC][PowerPC] Custom PowerPC specific machine-schedulerQingShan Zhang2019-03-277-1/+128
| | | | | | | | | | This patch lays the groundwork for extending the generic machine scheduler by providing a PPC-specific implementation. There are no functional changes as this is an incremental patch that simply provides the necessary overrides which just encapsulate the behavior of the generic scheduler. Subsequent patches will add specific behavior. Differential Revision: https://reviews.llvm.org/D59284 llvm-svn: 357047
* [X86] Simplify some code in matchBitExtract by using ANY_EXTEND.Craig Topper2019-03-271-10/+2
| | | | | | | We were manually outputting the code we would get from selecting ANY_EXTEND. We can save some code by just letting an ANY_EXTEND go through isel on its own. llvm-svn: 357045
* [PPC] Refactor PPCBranchSelector.cppGuozhi Wei2019-03-261-136/+177
| | | | | | | | | | This patch splits the huge function PPCBranchSelector.cpp:runOnMachineFunction into several smaller functions. No functional change. Differential Revision: https://reviews.llvm.org/D59623 llvm-svn: 357033
* [PowerPC] Remove UseVSXRegStefan Pintilie2019-03-264-110/+86
| | | | | | | | | | The UseVSXReg flag can be safely removed and the code cleaned up. Patch By: Yi-Hong Liu Differential Revision: https://reviews.llvm.org/D58685 llvm-svn: 357028
* [WebAssembly] Initial implementation of PIC code generationSam Clegg2019-03-269-49/+126
| | | | | | | | | | | | | | | | | | | | | | | This change implements lowering of references to global symbols in PIC mode using a new @GOT reference type. @GOT references can be used with function or data symbol names combined with the get_global instruction. In this case the linker will insert the wasm global that stores the address of the symbol (either in memory for data symbols or in the wasm table for function symbols). For now I'm continuing to use the R_WASM_GLOBAL_INDEX_LEB relocation type for this type of reference which means that this relocation type can refer to either a global or a function or data symbol. We could choose to introduce specific relocation types for GOT entries in the future. See the current dynamic linking proposal: https://github.com/WebAssembly/tool-conventions/blob/master/DynamicLinking.md Differential Revision: https://reviews.llvm.org/D54647 llvm-svn: 357022
* [WebAssembly] Don't analyze branches after CFGStackifyHeejin Ahn2019-03-262-21/+27
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: `WebAssembly::analyzeBranch` now does not analyze anything if the function is CFG stackified. We were previously doing similar things by checking if a branch's operand is whether an integer or an MBB, but this failed to bail out when a BB did not have any terminators. Consider this case: ``` bb0: try $label0 call @foo // unwinds to %ehpad bb1: ... br $label0 // jumps to %cont. can be deleted ehpad: catch ... cont: end_try ``` Here `br $label0` will be deleted in CFGStackify's `removeUnnecessaryInstrs` function, because we jump to the %cont block even without the branch. But in this case, MachineVerifier fails to verify this, because `ehpad` is not a successor of `bb1` even if `bb1` does not have any terminators. MachineVerifier incorrectly thinks `bb1` falls through to the next block. This pass now consistently rejects all analysis after CFGStackify whether a BB has terminators or not, also making the MachineVerifier work. (MachineVerifier does not try to verify relationships between BBs if `analyzeBranch` fails, the behavior we want after CFGStackify.) This also adds a new option `-wasm-disable-ehpad-sort` for testing. This option helps create the sorted order we want to test, and without the fix in this patch, the tests in cfg-stackify-eh.ll fail at MachineVerifier with `-wasm-disable-ehpad-sort`. Reviewers: dschuff Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59740 llvm-svn: 357015
* [WebAssembly] Add CFGStackified field to WebAssemblyFunctionInfoHeejin Ahn2019-03-263-3/+17
| | | | | | | | | | | | | | | | Summary: This adds `CFGStackified` field and its serialization to WebAssemblyFunctionInfo. Reviewers: dschuff Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59747 llvm-svn: 357011
* [WebAssembly] Support WebAssemblyFunctionInfo serializationHeejin Ahn2019-03-264-0/+63
| | | | | | | | | | | | | | | | | | | | Summary: The framework for supporting target-specific MachineFunctionInfo was added in r356215. This adds serialization support for WebAssemblyFunctionInfo on top of that. This patch only adds the framework and does not actually serialize anything at this point; we have to add YAML mapping later for the fields in WebAssemblyFunctionInfo we want to serialize if necessary. Reviewers: dschuff, arsenm Subscribers: sunfish, wdng, sbc100, jgravelle-google, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59737 llvm-svn: 357009
* [WebAssembly] Fix a bug when mixing TRY/LOOP markersHeejin Ahn2019-03-261-1/+3
| | | | | | | | | | | | | | | | | | | Summary: When TRY and LOOP markers are in the same BB and END_TRY and END_LOOP markers are in the same BB, END_TRY should be _before_ END_LOOP, because LOOP is always before TRY if they are in the same BB. (TRY is placed in the latest possible position, whereas LOOP is in the earliest possible position.) Reviewers: dschuff Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59751 llvm-svn: 357008
* [WebAssembly] Fix bugs in BLOCK/TRY placementHeejin Ahn2019-03-261-39/+43
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: Before we placed all TRY/END_TRY markers before placing BLOCK/END_BLOCK markers. This couldn't handle this case: ``` bb0: br bb2 bb1: // nearest common dominator of bb3 and bb4 br_if ... bb3 br bb4 bb2: ... bb3: call @foo // unwinds to ehpad bb4: call @bar // unwinds to ehpad ehpad: catch ... ``` When we placed TRY markers, we placed it in bb1 because it is the nearest common dominator of bb3 and bb4. But because bb0 jumps to bb2, when we placed block markers, we ended up with interleaved scopes like ``` block try end_block catch end_try ``` which was not correct. This patch fixes the bug by placing BLOCK and TRY markers in one pass while iterating BBs in a function. This also adds some more routines to `placeTryMarkers`, because we now have to assume that there can be previously placed BLOCK and END_BLOCK. Reviewers: dschuff Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59739 llvm-svn: 357007
* [SystemZ] Remove LRMux pseudo instruction.Jonas Paulsson2019-03-261-3/+0
| | | | | | | This instruction is unused and not needed. Review: Ulrich Weigand. llvm-svn: 356997
* [RISCV] Improve codegen for icmp {ne,eq} with a constantLuis Marques2019-03-261-0/+4
| | | | | | | | | Adds two patterns to improve the codegen of GPR value comparisons with small constants. Instead of first loading the constant into another register and then doing an XOR of those registers, these patterns directly use the constant as an XORI immediate. llvm-svn: 356990
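A source-level illustration of the comparisons these patterns target; the instruction sequence in the comment is the expected shape, not output copied from a test:
```
// Comparing a GPR value against a small constant. With the new patterns the
// constant folds into an XORI instead of first being materialized into a
// register, e.g. roughly: xori a0, a0, 42 ; seqz a0, a0 (sketch only).
bool isSentinel(long x) { return x == 42; }
bool isNotSentinel(long x) { return x != 42; } // xori + snez shape expected
```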
* [ARM][Asm] Accept upper case coprocessor number and registersOliver Stannard2019-03-261-2/+2
| | | | | | Differential revision: https://reviews.llvm.org/D59760 llvm-svn: 356984
* [X86] In matchBitExtract, place all of the new nodes before Node's position in the DAG for the topological sort.Craig Topper2019-03-261-10/+9
| | | | | | | | We were using OrigNBits, but that put all the nodes before the node we used to start the control computation. This caused some node earlier than the sequence we inserted to be selected before the sequence we created. We want our new sequence to be selected first since it depends on OrigNBits. I don't have a test case. Found by reviewing the code. llvm-svn: 356979
* [X86] In matchBitExtract, if we need to truncate the BEXTR make sure we put the BEXTR at Node's position in the DAG for the topological sort.Craig Topper2019-03-261-1/+1
| | | | | We were using OrigNBits, but that doesn't guarantee that it will be selected before the nodes that make up X. llvm-svn: 356978