path: root/llvm/lib/Target
Commit message | Author | Age | Files | Lines
* [WebAssembly] replaced .param/.result by .functype (Wouter van Oortmerssen, 2018-11-19, 6 files, -139/+114)
  Summary: This makes it easier/cleaner to generate a single signature from this directive.
  Also:
  - Adds the symbol name, such that we don't depend on the location of this directive anymore.
  - Actually constructs the signature in the assembler, and makes the assembler own it.
  - Refactors the use of MVT vs ValType in the streamer and assembler to require fewer conversions overall.
  - Changed 700 or so tests to use it.
  Reviewers: sbc100, dschuff
  Subscribers: jgravelle-google, eraman, aheejin, sunfish, jfb, llvm-commits
  Differential Revision: https://reviews.llvm.org/D54652
  llvm-svn: 347228
* [AMDGPU] Derive GCNSubtarget from MF to get overridden target features (David Stuttard, 2018-11-19, 1 file, -2/+2)
  Summary: AMDGPUAsmPrinter has a getSTI function that derives a GCNSubtarget from the TM. However, this means that overridden target features are not detected and can result in incorrect behaviour.
  Switch to using STM, which is a GCNSubtarget derived from the MF (used elsewhere in the same function).
  Change-Id: Ib6328ad667b7fcdc87e9c06344e59859207db9b0
  Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, llvm-commits
  Differential Revision: https://reviews.llvm.org/D54301
  llvm-svn: 347221
* Subject: [PATCH] [CodeGen] Add pass to combine interleaved loads. (Martin Elshuber, 2018-11-19, 1 file, -1/+3)
  This patch defines an interleaved-load-combine pass. The pass searches for ShuffleVector instructions that represent interleaved loads. Matches are converted such that they will be captured by the InterleavedAccessPass.
  The pass extends LLVM's capabilities to use target-specific instruction selection of interleaved load patterns (e.g. ld4 on AArch64 architectures).
  Differential Revision: https://reviews.llvm.org/D52653
  llvm-svn: 347208
* AMDGPU/InsertWaitcnts: Some more const-correctness (Nicolai Haehnle, 2018-11-19, 1 file, -4/+4)
  Reviewers: msearles, rampitec, scott.linder, kanarayan
  Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits, hakzsam
  Differential Revision: https://reviews.llvm.org/D54225
  llvm-svn: 347192
* [ARM] Remove trunc sinks in ARM CGP (Sam Parker, 2018-11-19, 1 file, -73/+133)
  Truncs are treated as sources if they produce a value of the same type as the one we are currently trying to promote. Truncs used to be considered a sink if their operand was the same value type. We now allow smaller types in the search, so we should search through truncs that produce a smaller value. These truncs can then be converted to an AND mask.
  This leaves sinks as being:
  - points where the value in the register is being observed, such as an icmp, switch or store.
  - points where value types have to match, such as calls and returns.
  - zexts are included to ease the transformation and are generally removed later on.
  During this change, it also became apparent that truncating sinks was broken: if a sink used a source, its type information had already been lost by the time the truncation happens. So I've changed the method of caching the type information.
  Differential Revision: https://reviews.llvm.org/D54515
  llvm-svn: 347191
* [MSP430] Optimize srl/sra in case of A >> (8 + N) (Anton Korobeynikov, 2018-11-19, 1 file, -2/+12)
  There are no variable-length shifts on MSP430. Therefore "eat" 8 bits of the shift via bswap & ext.
  Patch by Kristina Bessonova!
  Differential Revision: https://reviews.llvm.org/D54623
  llvm-svn: 347187
* [X86] Use a pcmpgt with 0 instead of psrad 31, to fill elements with the sign bit in v4i32 MULH lowering. (Craig Topper, 2018-11-19, 1 file, -3/+3)
  The shift requires a copy to avoid clobbering a register. Comparing with 0 uses an xor to produce 0 that will be overwritten with the compare results. So it still requires 2 instructions, but should be one byte shorter since it doesn't need to encode an immediate.
  llvm-svn: 347185
* [X86] Use compare with 0 to fill an element with sign bits when sign extending to v2i64 pre-sse4.1 (Craig Topper, 2018-11-19, 1 file, -4/+4)
  Previously we used an arithmetic shift right by 31, but that requires a copy to preserve the input. So we might as well materialize a zero and compare to it, since the comparison will overwrite the register that contains the zeros. This should be one byte shorter.
  llvm-svn: 347181
* [X86] Remove most of the SEXTLOAD Custom setOperationAction calls under -x86-experimental-vector-widening-legalization. (Craig Topper, 2018-11-19, 1 file, -7/+4)
  Leave just the v4i8->v4i64 and v8i8->v8i64, but only enable them on pre-sse4.1 targets when 64-bit mode is enabled. In those cases we end up creating sext loads that get scalarized to code that looks better than what we get from loading into a vector register and doing a multiple-step sign extend using unpacks and shifts.
  llvm-svn: 347180
* [X86][SSE] Add SimplifyDemandedVectorElts support for SSE packed i2fp conversions. (Simon Pilgrim, 2018-11-18, 1 file, -0/+27)
  llvm-svn: 347177
* [X86] Add custom type legalization for extending v4i8/v4i16->v4i64. (Craig Topper, 2018-11-18, 1 file, -8/+34)
  Pre-SSE4.1 sext_invec for v2i64 is complicated because we don't have a v2i64 sra instruction. So instead we sign extend to i32 using unpack and sra, then copy the elements and do a v4i32 sra to fill with sign bits, then interleave the i32 sign extend and the sign bits. So really we're doing two sign extends but only using half of the v4i32 intermediate result.
  When the result is more than 128 bits, default type legalization would prefer to split the destination type all the way down to v2i64 with shuffles followed by v16i8/v8i16->v2i64 sext_inreg operations. This results in more instructions than necessary because we are only utilizing the lower 2 elements of the v4i32 intermediate result.
  Instead we can custom split a v4i8/v4i16->v4i64 sign_extend. Then we can sign extend v4i8/v4i16->v4i32 invec, producing a full v4i32 result. Create the sign bit vector as a v4i32, then split and interleave with the sign bits using a punpckldq and punpckhdq.
  llvm-svn: 347176
* [X86][SSE] Add SimplifyDemandedVectorElts support for SSE splat-vector-shifts. (Simon Pilgrim, 2018-11-18, 1 file, -0/+41)
  SSE vector shifts only use the bottom 64-bits of the shift amount vector.
  llvm-svn: 347173
* [X86] Disable combineToExtendVectorInReg under -x86-experimental-vector-widening-legalization. Add custom type legalization for extends. (Craig Topper, 2018-11-18, 1 file, -0/+74)
  If we widen illegal types instead of promoting, we should be able to rely on the type legalizer to create the vector_inreg operations for us, with some caveats. This patch disables combineToExtendVectorInReg when we are using widening.
  I've enabled custom legalization for v8i8->v8i64 extends under avx512f, since the type legalizer would want to create a vector_inreg with a v64i8 input type, which isn't legal without avx512bw. So we go to v16i8 with custom code using the relaxation of rules we get from D54346.
  I've also enabled custom legalization of v8i64 and v16i32 operations with AVX. When the input type is 128 bits, the default splitting legalization would extend first 128->256, then do a split into two 128-bit pieces, extend each half to 256, and then concat the results. The custom legalization I've added instead uses a 128->256 bit vector_inreg extend that only reads the lower 64-bits for the low half of the split, then shuffles the high 64-bits to the low 64-bits and does another vector_inreg extend.
  llvm-svn: 347172
* [X86] Lower v16i16->v8i16 truncate using an 'and' with 255, an extract_subvector, and a packuswb instruction. (Craig Topper, 2018-11-18, 1 file, -0/+16)
  Summary: This is an improvement over the two pshufbs and punpcklqdq we'd get otherwise.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D54671
  llvm-svn: 347171
* Remove unused variable. NFCI. (Simon Pilgrim, 2018-11-18, 1 file, -8/+9)
  llvm-svn: 347169
* [X86][SSE] Split IsSplatValue into GetSplatValue and IsSplatVector (Simon Pilgrim, 2018-11-18, 1 file, -36/+40)
  Refactor towards making this recursive (necessary for PR38243 rotation splat detection). IsSplatVector returns the original vector source of the splat and the splat index. GetSplatValue returns the scalar splatted value as an extraction from IsSplatVector.
  llvm-svn: 347168
* [X86][SSE] Relax IsSplatValue - remove the 'variable shift' limit on subtracts. (Simon Pilgrim, 2018-11-18, 1 file, -7/+6)
  Means we don't use the per-lane-shifts as much when we can cheaply use the older splat-variable-shifts.
  llvm-svn: 347162
* [X86][SSE] Use raw shuffle mask decode in SimplifyDemandedVectorEltsForTargetNode (PR39549) (Simon Pilgrim, 2018-11-18, 1 file, -4/+10)
  We were using the 'normalized' shuffle mask from resolveTargetShuffleInputs, which replaces zero/undef inputs with sentinel values. For SimplifyDemandedVectorElts we need the raw mask so we can correctly demand those 'zero' inputs that got normalized away; this requires an extra bit of logic to locally normalize undef inputs.
  llvm-svn: 347158
* [WebAssembly] Add null streamer support (Heejin Ahn, 2018-11-18, 2 files, -0/+23)
  Summary: Now `llc -filetype=null` works.
  Reviewers: eush
  Subscribers: dschuff, jgravelle-google, sbc100, sunfish, llvm-commits
  Differential Revision: https://reviews.llvm.org/D54660
  llvm-svn: 347155
* [X86] Add -x86-experimental-vector-widening-legalization check to combineSelect and combineSetCC to cover vXi16/vXi8 promotion without BWI. (Craig Topper, 2018-11-18, 1 file, -2/+5)
  I don't yet have any test cases for this, but it's the right thing to do based on log file inspection.
  llvm-svn: 347151
* [X86] Rename WidenMaskArithmetic->PromoteMaskArithmetic since we usually use widen to refer to adding elements, not making elements larger. NFC (Craig Topper, 2018-11-18, 1 file, -4/+4)
  llvm-svn: 347150
* [X86] Don't use a pmaddwd for vXi32 multiply if the inputs are zero extends from i8 or smaller without SSE4.1. Prefer to shrink the mul instead. (Craig Topper, 2018-11-18, 1 file, -0/+10)
  The zero extend will require two stages of unpacks to implement. So it's better to shrink the multiply using pmullw and then extend that result back to v4i32 using a single unpack.
  llvm-svn: 347149
* [X86] Add support for matching PACKUSWB from a v64i8 shuffle. (Craig Topper, 2018-11-17, 1 file, -0/+5)
  llvm-svn: 347143
* [X86] Don't extend v32i8 multiplies to v32i16 with avx512bw and prefer-vector-width=256. (Craig Topper, 2018-11-17, 1 file, -1/+1)
  llvm-svn: 347131
* [X86] Use getUnpackl/getUnpackh instead of hardcoding a shuffle mask. (Craig Topper, 2018-11-17, 1 file, -8/+5)
  llvm-svn: 347127
* Use llvm::copy. NFC (Fangrui Song, 2018-11-17, 1 file, -2/+2)
  llvm-svn: 347126
* [X86] Add custom promotion of narrow fp_to_uint/fp_to_sint operations under -x86-experimental-vector-widening-legalization. (Craig Topper, 2018-11-16, 1 file, -3/+48)
  This tries to force the result type to vXi32 followed by a truncate. This can help avoid scalarization that would otherwise occur.
  There are some annoying examples of an avx512 truncate instruction followed by a packus where we should really be able to just use one truncate. But overall this is still a net improvement.
  llvm-svn: 347105
* [X86] Qualify part of the masked gather handling in ReplaceNodeResults with a getTypeAction call to know if we can use default legalization. (Craig Topper, 2018-11-16, 1 file, -21/+23)
  If we managed to switch to -x86-experimental-vector-widening-legalization, this block could be removed.
  llvm-svn: 347100
* [X86] Remove a branch on SSE4.1 from LowerLoad (Craig Topper, 2018-11-16, 1 file, -14/+2)
  We should be able to use getExtendInVec with or without sse4.1 to produce a SIGN_EXTEND_VECTOR_INREG.
  llvm-svn: 347095
* [X86] In LowerLoad, fix assert messages and rename a variable that used Zize instead of Size. NFC (Craig Topper, 2018-11-16, 1 file, -8/+8)
  llvm-svn: 347093
* AArch64: Emit a call frame instruction for the shadow call stack register. (Peter Collingbourne, 2018-11-16, 1 file, -6/+25)
  When unwinding past a function that uses shadow call stack, we must subtract 8 from the value of the x18 register. This patch causes us to emit a call frame instruction that causes that to happen.
  Differential Revision: https://reviews.llvm.org/D54609
  llvm-svn: 347089
* [MSP430] Add RTLIB::[SRL/SRA/SHL]_I32 lowering to EABI lib calls (Anton Korobeynikov, 2018-11-16, 1 file, -2/+7)
  Patch by Kristina Bessonova!
  Differential Revision: https://reviews.llvm.org/D54626
  llvm-svn: 347080
* [X86] Disable Condbr_merge pass (Rong Xu, 2018-11-16, 1 file, -1/+1)
  Disable the Condbr_merge pass for now due to PR39658. Will re-enable the pass once the bug is fixed.
  llvm-svn: 347079
* Revert "[PowerPC] Make no-PIC default to match GCC - LLVM" (Stefan Pintilie, 2018-11-16, 1 file, -1/+5)
  This reverts commit r347069.
  llvm-svn: 347076
* [MSP430] Use R_MSP430_16_BYTE type for FK_Data_2 fixup (Anton Korobeynikov, 2018-11-16, 1 file, -1/+1)
  The linker fails to link an example like this (simplified case from newlib sources):
    $ cat test.c
    extern const char _ctype_b[];
    struct _t { char *ptr; };
    struct _t T = { ((char *) _ctype_b + 3) };
    $ cat ctype.c
    char _ctype_b[4] = { 0, 0, 0, 0 };
    LD: test.o:(.data+0x0): warning: internal error: unsupported relocation error
  We also follow the gnu toolchain here, where a 2-byte relocation is mapped to R_MSP430_16_BYTE, instead of R_MSP430_16.
  Patch by Kristina Bessonova!
  Differential Revision: https://reviews.llvm.org/D54620
  llvm-svn: 347074
* [WebAssembly] Default to static reloc model (Sam Clegg, 2018-11-16, 1 file, -2/+6)
  Differential Revision: https://reviews.llvm.org/D54637
  llvm-svn: 347073
* [PowerPC] Make no-PIC default to match GCC - LLVM (Stefan Pintilie, 2018-11-16, 1 file, -5/+1)
  Set -fno-PIC as the default option.
  Differential Revision: https://reviews.llvm.org/D53383
  llvm-svn: 347069
* [SelectionDAG] Move (repeated) SDTIntShiftDOp double shift node def to common code. NFCI. (Simon Pilgrim, 2018-11-16, 2 files, -7/+0)
  Prep work for PR39467.
  llvm-svn: 347067
* [X86][SSE] Move number of input limit out of resolveTargetShuffleInputs. (Simon Pilgrim, 2018-11-16, 1 file, -3/+5)
  Only combineX86ShufflesRecursively needs this limit.
  llvm-svn: 347054
* [X86] X86DAGToDAGISel::matchBitExtract(): extract 'lshr' from `X` (Roman Lebedev, 2018-11-16, 1 file, -1/+19)
  Summary: As discussed in previous review, and noted in the FIXME, if `X` is actually an `lshr Y, Z` (logical!), we can fold the `Z` into 'control', and let the `BEXTR` do this too. We could just insert those 8 bits of shift amount into the control, but it is better to instead zero-extend them, and 'or' them in place.
  We can only do this for `lshr`, not `ashr`, because we do not know that the mask covers only the bits of `Y`, and not any of the sign-extended bits.
  The obvious question is, is this actually legal to do? I believe it is. Relevant quotes, from `Intel® 64 and IA-32 Architectures Software Developer's Manual`, `BEXTR — Bit Field Extract`:
  - `Bit 7:0 of the second source operand specifies the starting bit position of bit extraction.`
  - `A START value exceeding the operand size will not extract any bits from the second source operand.`
  - `Only bit positions up to (OperandSize - 1) of the first source operand are extracted.`
  - `All higher order bits in the destination operand (starting at bit position LENGTH) are zeroed.`
  - `The destination register is cleared if no bits are extracted.`
  FIXME: if we can do this, I wonder if we should prefer `BEXTR` over `BZHI` in such cases.
  Reviewers: RKSimon, craig.topper, spatel, andreadb
  Reviewed By: RKSimon, craig.topper, andreadb
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D54095
  llvm-svn: 347048
* [RISCV][NFC] Define and use the new CA instruction format (Alex Bradbury, 2018-11-16, 4 files, -19/+31)
  The RISC-V ISA manual was updated on 2018-11-07 (commit 00557c3) to define a new compressed instruction format, RVC format CA (no actual instruction encodings were changed). This patch updates the RISC-V backend to define the new format, and to use it in the relevant instructions.
  Differential Revision: https://reviews.llvm.org/D54302
  Patch by Luís Marques.
  llvm-svn: 347043
* [RISCV] Constant materialisation for RV64I (Alex Bradbury, 2018-11-16, 1 file, -1/+28)
  This commit introduces support for materialising 64-bit constants for RV64I, making use of the RISCVMatInt::generateInstSeq helper in order to share logic for immediate materialisation with the MC layer (where it's used for the li pseudoinstruction).
  test/CodeGen/RISCV/imm.ll is updated to test RV64, and gains new 64-bit constant tests. It would be preferable if anyext constant returns were sign rather than zero extended (see PR39092). This patch simply adds an explicit signext to the returns in imm.ll.
  Further optimisations for constant materialisation are possible, most notably for mask-like values which can be generated by loading -1 and shifting right. A future patch will standardise on the C++ codepath for immediate selection on RV32 as well as RV64, and then add further such optimisations to RISCVMatInt::generateInstSeq in order to benefit both RV32 and RV64 for codegen and li expansion.
  Differential Revision: https://reviews.llvm.org/D52962
  llvm-svn: 347042
* [MSP430] Add support for .refsym directive (Anton Korobeynikov, 2018-11-16, 1 file, -0/+13)
  Introduces support for the '.refsym' assembler directive. From the GCC docs (for MSP430): '.refsym' - This directive instructs the assembler to add an undefined reference to the symbol following the directive. No relocation is created for this symbol; it will exist purely for pulling in object files from archives.
  Patch by Kristina Bessonova!
  Differential Revision: https://reviews.llvm.org/D54618
  llvm-svn: 347041
* [X86] Add custom type legalization for v2i8/v4i8/v8i8 mul under -x86-experimental-vector-widening. (Craig Topper, 2018-11-16, 1 file, -2/+20)
  By promoting the multiply early to use an i16 element type, we can avoid op legalization emitting a second multiply for the 8 upper elements of the v16i8 type we would otherwise get.
  llvm-svn: 347032
* AMDGPU: Fix analyzeBranch failing with pseudoterminators (Matt Arsenault, 2018-11-16, 3 files, -3/+31)
  If a block had one of the _term instructions used for gluing exec-modifying instructions to the end of the block, analyzeBranch would fail, preventing the verifier from catching a broken successor list.
  llvm-svn: 347027
* [X86] Use ANY_EXTEND instead of SIGN_EXTEND in the AVX2 and later path for legalizing vXi8 multiply. (Craig Topper, 2018-11-16, 1 file, -2/+2)
  We aren't going to use the upper bits of the multiply result that the extend would affect, so we don't need a specific type of extend.
  This makes some reduction test cases shorter because we were previously trying to sign_extend a truncate, which we can't eliminate.
  llvm-svn: 347011
* [X86] Update a couple comments to remove a mention of a sign extend that no longer happens. NFC (Craig Topper, 2018-11-16, 1 file, -3/+3)
  llvm-svn: 347010
* [AMDGPU] Add FixupVectorISel pass, currently supports SREGs in GLOBAL LD/ST (Ron Lieberman, 2018-11-16, 8 files, -4/+263)
  Add a pass to fix up various vector ISel issues. Currently we handle converting GLOBAL_{LOAD|STORE}_* and GLOBAL_Atomic_* instructions into their _SADDR variants. This involves feeding the sreg into the saddr field of the new instruction.
  llvm-svn: 347008
* [WebAssembly] Split BBs after throw instructions (Heejin Ahn, 2018-11-16, 1 file, -14/+44)
  Summary: The `throw` instruction is a terminator in wasm, but BBs were not split after `throw` instructions, causing the machine instruction verifier to fail.
  This patch
  - Splits BBs after `throw` instructions in WasmEHPrepare and adds an unreachable instruction after `throw`, which will be deleted in the LateEHPrepare pass
  - Refactors WasmEHPrepare into two member functions
  - Changes the semantics of `eraseBBsAndChildren` in the LateEHPrepare pass to match that of WasmEHPrepare, which is newly added. Now `eraseBBsAndChildren` does not delete BBs with remaining predecessors.
  - Fixes style nits, making static function names conform to clang-tidy
  - Re-enables the test temporarily disabled by rL346840 && rL346845
  Reviewers: dschuff
  Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
  Differential Revision: https://reviews.llvm.org/D54571
  llvm-svn: 347003
* [AMDGPU] NFC Test commit (Ron Lieberman, 2018-11-16, 1 file, -1/+1)
  llvm-svn: 347002